I0204 12:56:14.451655 8 e2e.go:243] Starting e2e run "dd3dcc5b-0f6b-485b-ba78-f7d520ec03e1" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580820973 - Will randomize all specs
Will run 215 of 4412 specs
Feb 4 12:56:14.810: INFO: >>> kubeConfig: /root/.kube/config
Feb 4 12:56:14.813: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 4 12:56:14.839: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 4 12:56:14.892: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 4 12:56:14.892: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 4 12:56:14.892: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 4 12:56:14.907: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 4 12:56:14.907: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 4 12:56:14.907: INFO: e2e test version: v1.15.7
Feb 4 12:56:14.909: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 12:56:14.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
Feb 4 12:56:14.975: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 4 12:56:15.009: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 4 12:56:27.136: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 4 12:56:29.146: INFO: Creating deployment "test-rollover-deployment"
Feb 4 12:56:29.187: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 4 12:56:31.208: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 4 12:56:31.217: INFO: Ensure that both replica sets have 1 created replica
Feb 4 12:56:31.224: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 4 12:56:31.235: INFO: Updating deployment test-rollover-deployment
Feb 4 12:56:31.235: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 4 12:56:33.258: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 4 12:56:33.266: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 4 12:56:33.274: INFO: all replica sets need to contain the pod-template-hash label
Feb 4 12:56:33.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}},
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:35.290: INFO: all replica sets need to contain the pod-template-hash label Feb 4 12:56:35.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:37.287: INFO: all replica sets need to contain the pod-template-hash label Feb 4 12:56:37.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:39.376: INFO: all replica sets need to contain the pod-template-hash label Feb 4 12:56:39.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:41.284: INFO: all replica sets need to contain the pod-template-hash label Feb 4 12:56:41.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:43.290: INFO: all replica sets need to contain the pod-template-hash label Feb 4 12:56:43.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:45.284: INFO: all replica sets need to contain the pod-template-hash label Feb 4 12:56:45.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:47.295: INFO: all replica sets need to contain the pod-template-hash label Feb 4 12:56:47.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:49.290: INFO: all replica sets need to contain the pod-template-hash label Feb 4 12:56:49.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:51.290: INFO: all replica sets need to contain the pod-template-hash label Feb 4 12:56:51.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417803, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:53.296: INFO: Feb 4 12:56:53.296: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417813, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716417789, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:56:55.287: INFO: Feb 4 12:56:55.287: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 4 12:56:55.303: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8915,SelfLink:/apis/apps/v1/namespaces/deployment-8915/deployments/test-rollover-deployment,UID:a00332f5-9816-4d90-805d-1609a6eeee08,ResourceVersion:23063370,Generation:2,CreationTimestamp:2020-02-04 12:56:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-04 12:56:29 +0000 UTC 2020-02-04 
12:56:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-04 12:56:53 +0000 UTC 2020-02-04 12:56:29 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 4 12:56:55.318: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8915,SelfLink:/apis/apps/v1/namespaces/deployment-8915/replicasets/test-rollover-deployment-854595fc44,UID:56098afb-136e-48a6-86de-e97a13ee7965,ResourceVersion:23063359,Generation:2,CreationTimestamp:2020-02-04 12:56:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a00332f5-9816-4d90-805d-1609a6eeee08 0xc00307b057 0xc00307b058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 4 12:56:55.318: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 4 12:56:55.318: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8915,SelfLink:/apis/apps/v1/namespaces/deployment-8915/replicasets/test-rollover-controller,UID:4e76d57f-c37c-4d0d-99f3-1aa4e0459354,ResourceVersion:23063369,Generation:2,CreationTimestamp:2020-02-04 12:56:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
a00332f5-9816-4d90-805d-1609a6eeee08 0xc00307af77 0xc00307af78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 4 12:56:55.319: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8915,SelfLink:/apis/apps/v1/namespaces/deployment-8915/replicasets/test-rollover-deployment-9b8b997cf,UID:8b021035-162c-410f-8cb8-b45f3399515a,ResourceVersion:23063314,Generation:2,CreationTimestamp:2020-02-04 12:56:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a00332f5-9816-4d90-805d-1609a6eeee08 0xc00307b2b0 0xc00307b2b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 4 12:56:55.328: INFO: Pod "test-rollover-deployment-854595fc44-g6bgs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-g6bgs,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8915,SelfLink:/api/v1/namespaces/deployment-8915/pods/test-rollover-deployment-854595fc44-g6bgs,UID:aea24639-434d-4a33-bfb7-c697a8262eff,ResourceVersion:23063342,Generation:0,CreationTimestamp:2020-02-04 12:56:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 56098afb-136e-48a6-86de-e97a13ee7965 0xc0030fe717 0xc0030fe718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7np2f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7np2f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-7np2f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0030fe790} {node.kubernetes.io/unreachable Exists NoExecute 0xc0030fe7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:56:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:56:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:56:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 12:56:31 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-04 12:56:32 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-04 12:56:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://14f2d594c693acff359112a81c68ca2534ca174c0ee9d50ac64e32dbbe00113b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 12:56:55.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8915" for this suite.
Feb 4 12:57:03.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 12:57:03.937: INFO: namespace deployment-8915 deletion completed in 8.598390475s
• [SLOW TEST:49.027 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 12:57:03.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-03033fad-3d76-428e-a233-211bc4d217ee
STEP: Creating a pod to test consume secrets
Feb 4 12:57:04.247: INFO: Waiting up to 5m0s for pod "pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1" in namespace "secrets-4564" to be "success or failure"
Feb 4 12:57:04.368: INFO: Pod "pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 120.596501ms
Feb 4 12:57:06.374: INFO: Pod "pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12677193s
Feb 4 12:57:08.383: INFO: Pod "pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135651179s
Feb 4 12:57:10.416: INFO: Pod "pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169246986s
Feb 4 12:57:12.423: INFO: Pod "pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175379716s
Feb 4 12:57:14.434: INFO: Pod "pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.186544625s STEP: Saw pod success Feb 4 12:57:14.434: INFO: Pod "pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1" satisfied condition "success or failure" Feb 4 12:57:14.437: INFO: Trying to get logs from node iruya-node pod pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1 container secret-volume-test: STEP: delete the pod Feb 4 12:57:14.553: INFO: Waiting for pod pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1 to disappear Feb 4 12:57:14.565: INFO: Pod pod-secrets-82e756b2-138c-4e4b-b4a7-fa47778ba2f1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 12:57:14.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4564" for this suite. Feb 4 12:57:20.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 12:57:20.770: INFO: namespace secrets-4564 deletion completed in 6.137407146s STEP: Destroying namespace "secret-namespace-6038" for this suite. 
Feb 4 12:57:26.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 12:57:26.950: INFO: namespace secret-namespace-6038 deletion completed in 6.1791964s • [SLOW TEST:23.011 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 12:57:26.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-de1d716c-8b2e-44cc-8d53-fc068d8d9c8e in namespace container-probe-7011 Feb 4 12:57:37.095: INFO: Started pod liveness-de1d716c-8b2e-44cc-8d53-fc068d8d9c8e in namespace container-probe-7011 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 12:57:37.098: INFO: Initial restart count of pod 
liveness-de1d716c-8b2e-44cc-8d53-fc068d8d9c8e is 0 Feb 4 12:57:57.211: INFO: Restart count of pod container-probe-7011/liveness-de1d716c-8b2e-44cc-8d53-fc068d8d9c8e is now 1 (20.113431533s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 12:57:57.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7011" for this suite. Feb 4 12:58:03.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 12:58:03.410: INFO: namespace container-probe-7011 deletion completed in 6.143045428s • [SLOW TEST:36.460 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 12:58:03.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-aea22422-847e-4310-9f60-6f1e1f1e39ab in namespace container-probe-1452 Feb 4 12:58:11.699: INFO: Started pod liveness-aea22422-847e-4310-9f60-6f1e1f1e39ab in namespace container-probe-1452 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 12:58:11.711: INFO: Initial restart count of pod liveness-aea22422-847e-4310-9f60-6f1e1f1e39ab is 0 Feb 4 12:58:27.852: INFO: Restart count of pod container-probe-1452/liveness-aea22422-847e-4310-9f60-6f1e1f1e39ab is now 1 (16.141182033s elapsed) Feb 4 12:58:46.000: INFO: Restart count of pod container-probe-1452/liveness-aea22422-847e-4310-9f60-6f1e1f1e39ab is now 2 (34.288759397s elapsed) Feb 4 12:59:08.104: INFO: Restart count of pod container-probe-1452/liveness-aea22422-847e-4310-9f60-6f1e1f1e39ab is now 3 (56.392682584s elapsed) Feb 4 12:59:28.214: INFO: Restart count of pod container-probe-1452/liveness-aea22422-847e-4310-9f60-6f1e1f1e39ab is now 4 (1m16.502899699s elapsed) Feb 4 13:00:30.600: INFO: Restart count of pod container-probe-1452/liveness-aea22422-847e-4310-9f60-6f1e1f1e39ab is now 5 (2m18.889322991s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:00:30.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1452" for this suite. 
Feb 4 13:00:36.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:00:36.815: INFO: namespace container-probe-1452 deletion completed in 6.153655139s • [SLOW TEST:153.404 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:00:36.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Feb 4 13:00:36.918: INFO: Waiting up to 5m0s for pod "var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113" in namespace "var-expansion-3558" to be "success or failure" Feb 4 13:00:36.927: INFO: Pod "var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113": Phase="Pending", Reason="", readiness=false. Elapsed: 8.625111ms Feb 4 13:00:38.936: INFO: Pod "var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01798404s Feb 4 13:00:40.948: INFO: Pod "var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029777035s Feb 4 13:00:42.974: INFO: Pod "var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055471175s Feb 4 13:00:44.983: INFO: Pod "var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064823457s Feb 4 13:00:46.993: INFO: Pod "var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074344323s Feb 4 13:00:48.999: INFO: Pod "var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.081092071s STEP: Saw pod success Feb 4 13:00:49.000: INFO: Pod "var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113" satisfied condition "success or failure" Feb 4 13:00:49.003: INFO: Trying to get logs from node iruya-node pod var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113 container dapi-container: STEP: delete the pod Feb 4 13:00:49.178: INFO: Waiting for pod var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113 to disappear Feb 4 13:00:49.190: INFO: Pod var-expansion-30219cc0-9f9b-4f99-a419-9f8c006aa113 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:00:49.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3558" for this suite. 
Feb 4 13:00:55.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:00:55.377: INFO: namespace var-expansion-3558 deletion completed in 6.178269813s • [SLOW TEST:18.562 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:00:55.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 4 13:04:02.794: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:02.828: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:04.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:04.837: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:06.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:06.836: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:08.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:08.837: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:10.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:10.836: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:12.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:12.838: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:14.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:14.846: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:16.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:16.834: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:18.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:18.836: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:20.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:20.839: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:22.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:22.836: INFO: Pod pod-with-poststart-exec-hook still 
exists Feb 4 13:04:24.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:24.837: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:26.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:26.835: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:28.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:28.836: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:30.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:30.834: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:32.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:32.833: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:34.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:34.836: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:36.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:36.850: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:38.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:38.836: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:40.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:40.841: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:42.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:42.835: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:44.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:44.833: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:46.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:46.844: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:48.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:48.840: INFO: Pod 
pod-with-poststart-exec-hook still exists Feb 4 13:04:50.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:50.837: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:52.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:52.846: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:54.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:54.838: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:56.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:56.839: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:04:58.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:04:58.842: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:00.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:00.839: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:02.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:02.837: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:04.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:04.836: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:06.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:06.838: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:08.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:08.839: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:10.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:10.841: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:12.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:12.845: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:14.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear 
Feb 4 13:05:14.838: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:16.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:16.836: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:18.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:18.836: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:20.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:20.839: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:22.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:22.838: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:24.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:24.841: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:26.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:26.843: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:28.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:28.837: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:05:30.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:05:30.836: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:05:30.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-856" for this suite. 
Feb 4 13:05:50.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:05:50.966: INFO: namespace container-lifecycle-hook-856 deletion completed in 20.122947084s • [SLOW TEST:295.588 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:05:50.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-87da2271-0d01-4cc5-b70e-e7c137e0f6a3 STEP: Creating a pod to test consume secrets Feb 4 13:05:51.042: INFO: Waiting up to 5m0s for pod "pod-secrets-c0335c1d-4124-4440-b908-7be17665962a" in namespace "secrets-7427" to be "success or failure" Feb 4 13:05:51.050: INFO: Pod "pod-secrets-c0335c1d-4124-4440-b908-7be17665962a": Phase="Pending", Reason="", 
readiness=false. Elapsed: 8.181616ms Feb 4 13:05:53.066: INFO: Pod "pod-secrets-c0335c1d-4124-4440-b908-7be17665962a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024015904s Feb 4 13:05:55.079: INFO: Pod "pod-secrets-c0335c1d-4124-4440-b908-7be17665962a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037092773s Feb 4 13:05:57.095: INFO: Pod "pod-secrets-c0335c1d-4124-4440-b908-7be17665962a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053448761s Feb 4 13:05:59.104: INFO: Pod "pod-secrets-c0335c1d-4124-4440-b908-7be17665962a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062313238s Feb 4 13:06:01.111: INFO: Pod "pod-secrets-c0335c1d-4124-4440-b908-7be17665962a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068945187s STEP: Saw pod success Feb 4 13:06:01.111: INFO: Pod "pod-secrets-c0335c1d-4124-4440-b908-7be17665962a" satisfied condition "success or failure" Feb 4 13:06:01.114: INFO: Trying to get logs from node iruya-node pod pod-secrets-c0335c1d-4124-4440-b908-7be17665962a container secret-volume-test: STEP: delete the pod Feb 4 13:06:01.263: INFO: Waiting for pod pod-secrets-c0335c1d-4124-4440-b908-7be17665962a to disappear Feb 4 13:06:01.288: INFO: Pod pod-secrets-c0335c1d-4124-4440-b908-7be17665962a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:06:01.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7427" for this suite. 
Feb 4 13:06:07.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:06:07.480: INFO: namespace secrets-7427 deletion completed in 6.184722702s • [SLOW TEST:16.514 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:06:07.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 4 13:06:07.625: INFO: Waiting up to 5m0s for pod "pod-9305e612-34b7-4bff-bf05-4ecedd15f504" in namespace "emptydir-5047" to be "success or failure" Feb 4 13:06:07.636: INFO: Pod "pod-9305e612-34b7-4bff-bf05-4ecedd15f504": Phase="Pending", Reason="", readiness=false. Elapsed: 11.049923ms Feb 4 13:06:09.645: INFO: Pod "pod-9305e612-34b7-4bff-bf05-4ecedd15f504": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019490697s Feb 4 13:06:11.656: INFO: Pod "pod-9305e612-34b7-4bff-bf05-4ecedd15f504": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031305296s Feb 4 13:06:13.667: INFO: Pod "pod-9305e612-34b7-4bff-bf05-4ecedd15f504": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042212762s Feb 4 13:06:15.676: INFO: Pod "pod-9305e612-34b7-4bff-bf05-4ecedd15f504": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050708821s Feb 4 13:06:17.685: INFO: Pod "pod-9305e612-34b7-4bff-bf05-4ecedd15f504": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059886795s STEP: Saw pod success Feb 4 13:06:17.685: INFO: Pod "pod-9305e612-34b7-4bff-bf05-4ecedd15f504" satisfied condition "success or failure" Feb 4 13:06:17.690: INFO: Trying to get logs from node iruya-node pod pod-9305e612-34b7-4bff-bf05-4ecedd15f504 container test-container: STEP: delete the pod Feb 4 13:06:17.818: INFO: Waiting for pod pod-9305e612-34b7-4bff-bf05-4ecedd15f504 to disappear Feb 4 13:06:17.831: INFO: Pod pod-9305e612-34b7-4bff-bf05-4ecedd15f504 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:06:17.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5047" for this suite. 
Feb 4 13:06:23.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:06:24.168: INFO: namespace emptydir-5047 deletion completed in 6.329374415s • [SLOW TEST:16.688 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:06:24.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Feb 4 13:06:24.933: INFO: created pod pod-service-account-defaultsa Feb 4 13:06:24.933: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 4 13:06:24.984: INFO: created pod pod-service-account-mountsa Feb 4 13:06:24.984: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 4 13:06:25.009: INFO: created pod pod-service-account-nomountsa Feb 4 13:06:25.009: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 4 13:06:25.096: INFO: created pod pod-service-account-defaultsa-mountspec Feb 4 13:06:25.096: INFO: pod 
pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 4 13:06:25.120: INFO: created pod pod-service-account-mountsa-mountspec Feb 4 13:06:25.120: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 4 13:06:25.139: INFO: created pod pod-service-account-nomountsa-mountspec Feb 4 13:06:25.139: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 4 13:06:25.153: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 4 13:06:25.153: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 4 13:06:25.186: INFO: created pod pod-service-account-mountsa-nomountspec Feb 4 13:06:25.186: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 4 13:06:25.338: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 4 13:06:25.338: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:06:25.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1383" for this suite. 
Feb 4 13:07:29.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:07:29.652: INFO: namespace svcaccounts-1383 deletion completed in 1m4.169186079s • [SLOW TEST:65.484 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:07:29.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-d4d40284-1994-4cc6-94a6-49f1273c41f2 STEP: Creating a pod to test consume configMaps Feb 4 13:07:29.773: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36" in namespace "projected-3120" to be "success or failure" Feb 4 13:07:29.795: INFO: Pod "pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.658054ms Feb 4 13:07:31.804: INFO: Pod "pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03021199s Feb 4 13:07:33.815: INFO: Pod "pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041143305s Feb 4 13:07:35.831: INFO: Pod "pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057762433s Feb 4 13:07:37.848: INFO: Pod "pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074232713s Feb 4 13:07:39.864: INFO: Pod "pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090501794s STEP: Saw pod success Feb 4 13:07:39.864: INFO: Pod "pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36" satisfied condition "success or failure" Feb 4 13:07:39.873: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36 container projected-configmap-volume-test: STEP: delete the pod Feb 4 13:07:39.951: INFO: Waiting for pod pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36 to disappear Feb 4 13:07:39.959: INFO: Pod pod-projected-configmaps-2d8c124a-1f32-4723-99ce-ec2882e95b36 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:07:39.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3120" for this suite. 
Feb 4 13:07:46.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:07:46.180: INFO: namespace projected-3120 deletion completed in 6.158956728s
• [SLOW TEST:16.528 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:07:46.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 4 13:07:46.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4101'
Feb 4 13:07:48.375: INFO: stderr: ""
Feb 4 13:07:48.375: INFO: stdout:
"pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 4 13:07:58.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4101 -o json' Feb 4 13:07:58.571: INFO: stderr: "" Feb 4 13:07:58.571: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-04T13:07:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-4101\",\n \"resourceVersion\": \"23064658\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4101/pods/e2e-test-nginx-pod\",\n \"uid\": \"9566b767-87c1-4d61-9a06-b7cb762f6618\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rnqbv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rnqbv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": 
\"default-token-rnqbv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-04T13:07:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-04T13:07:56Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-04T13:07:56Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-04T13:07:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://01fa0e4df4b3a8728fe191f0b4af879cfd055d1c0572e3aeb7a77d08848bc7d2\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-04T13:07:55Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-04T13:07:48Z\"\n }\n}\n" STEP: replace the image in the pod Feb 4 13:07:58.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4101' Feb 4 13:07:58.944: INFO: stderr: "" Feb 4 13:07:58.944: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Feb 4 13:07:58.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4101' Feb 4 13:08:06.520: INFO: stderr: "" Feb 4 13:08:06.520: INFO: stdout: "pod 
\"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:08:06.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4101" for this suite.
Feb 4 13:08:12.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:08:12.671: INFO: namespace kubectl-4101 deletion completed in 6.142646172s
• [SLOW TEST:26.490 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:08:12.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 4 13:08:12.772: INFO: Waiting up to 5m0s for pod "pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7" in namespace "emptydir-4773" to be "success or failure"
Feb 4 13:08:12.802: INFO: Pod "pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.122561ms
Feb 4 13:08:14.810: INFO: Pod "pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038190189s
Feb 4 13:08:16.820: INFO: Pod "pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0481816s
Feb 4 13:08:18.831: INFO: Pod "pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059375811s
Feb 4 13:08:20.858: INFO: Pod "pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08564174s
Feb 4 13:08:22.875: INFO: Pod "pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102941022s
STEP: Saw pod success
Feb 4 13:08:22.875: INFO: Pod "pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7" satisfied condition "success or failure"
Feb 4 13:08:22.880: INFO: Trying to get logs from node iruya-node pod pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7 container test-container:
STEP: delete the pod
Feb 4 13:08:22.978: INFO: Waiting for pod pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7 to disappear
Feb 4 13:08:23.070: INFO: Pod pod-139dfc8f-3f6c-48ab-9c57-1eddc34ddee7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:08:23.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4773" for this suite.
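For reference, the (root,0644,default) emptyDir case above amounts to a pod that mounts an emptyDir volume on the default medium (node-backed storage rather than tmpfs), creates a file with mode 0644 as root, and verifies the resulting permissions. A minimal hand-written sketch — the real test uses the e2e mount-test utility image, so the busybox image and shell command here are assumed stand-ins:

```yaml
# Illustrative sketch of the emptyDir (root,0644,default) check;
# image and command are assumptions standing in for the e2e mount-test utility.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # hypothetical name
spec:
  restartPolicy: Never
  volumes:
    - name: test-volume
      emptyDir: {}             # default medium: backed by node storage
  containers:
    - name: test-container
      image: busybox
      command:
        - sh
        - -c
        - |
          echo content > /test-volume/test-file
          chmod 0644 /test-volume/test-file
          ls -l /test-volume/test-file   # expect -rw-r--r-- owned by root
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
```

As with the other "success or failure" tests in this run, the framework waits for the pod to reach Succeeded, checks the container log, and then deletes the pod.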
Feb 4 13:08:29.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:08:29.214: INFO: namespace emptydir-4773 deletion completed in 6.132382542s
• [SLOW TEST:16.543 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:08:29.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 4 13:08:29.389: INFO: Creating deployment "nginx-deployment"
Feb 4 13:08:29.414: INFO: Waiting for observed generation 1
Feb 4 13:08:33.401: INFO: Waiting for all required pods to come up
Feb 4 13:08:33.489: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 4 13:08:58.241: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 4 13:08:58.252: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 4 13:08:58.290: INFO: Updating deployment nginx-deployment
Feb 4 13:08:58.290: INFO:
Waiting for observed generation 2 Feb 4 13:09:01.390: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 4 13:09:01.431: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 4 13:09:01.898: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 4 13:09:01.974: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 4 13:09:01.974: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 4 13:09:01.979: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 4 13:09:01.990: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 4 13:09:01.990: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 4 13:09:02.082: INFO: Updating deployment nginx-deployment Feb 4 13:09:02.082: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 4 13:09:03.215: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 4 13:09:07.878: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 4 13:09:12.613: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1168,SelfLink:/apis/apps/v1/namespaces/deployment-1168/deployments/nginx-deployment,UID:f03290d9-9e3f-49d0-aee2-0a28ef5867aa,ResourceVersion:23064962,Generation:3,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-04 13:08:58 +0000 UTC 2020-02-04 13:08:29 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-04 13:09:03 +0000 UTC 2020-02-04 13:09:03 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 4 13:09:14.939: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-1168,SelfLink:/apis/apps/v1/namespaces/deployment-1168/replicasets/nginx-deployment-55fb7cb77f,UID:c3705200-0c2d-4a10-aa72-369df021034c,ResourceVersion:23065022,Generation:3,CreationTimestamp:2020-02-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f03290d9-9e3f-49d0-aee2-0a28ef5867aa 0xc0021da207 0xc0021da208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 4 13:09:14.939: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 4 13:09:14.939: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-1168,SelfLink:/apis/apps/v1/namespaces/deployment-1168/replicasets/nginx-deployment-7b8c6f4498,UID:a047dcf8-5ecd-4652-b5d6-c2bc9726badc,ResourceVersion:23065021,Generation:3,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f03290d9-9e3f-49d0-aee2-0a28ef5867aa 0xc0021da2d7 0xc0021da2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 4 13:09:15.034: INFO: Pod "nginx-deployment-55fb7cb77f-4fxq4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4fxq4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-4fxq4,UID:585b4b34-b4e9-4433-8087-12b9d02478a1,ResourceVersion:23064922,Generation:0,CreationTimestamp:2020-02-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021dac47 0xc0021dac48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0021dacc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021dace0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-04 13:08:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.034: INFO: Pod "nginx-deployment-55fb7cb77f-646bc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-646bc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-646bc,UID:84798618-e0a6-4dc4-84c7-6dd09c8736c3,ResourceVersion:23064952,Generation:0,CreationTimestamp:2020-02-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021dadb7 0xc0021dadb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021dae30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021dae50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-04 13:09:01 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.034: INFO: Pod "nginx-deployment-55fb7cb77f-6kk4b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6kk4b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-6kk4b,UID:2da7f38d-96cb-497b-8076-d77451c34d7b,ResourceVersion:23064984,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021daf27 0xc0021daf28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021daf90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021dafb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.035: INFO: Pod "nginx-deployment-55fb7cb77f-792h8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-792h8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-792h8,UID:d8b27fbd-0861-4ebb-b296-c2602d18210c,ResourceVersion:23064988,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021db037 
0xc0021db038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021db0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021db0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.035: INFO: Pod "nginx-deployment-55fb7cb77f-dpwjh" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dpwjh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-dpwjh,UID:ec8e535d-fb9c-4eb3-9ded-3afeed91c1dc,ResourceVersion:23064985,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021db157 0xc0021db158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0021db1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021db1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.035: INFO: Pod "nginx-deployment-55fb7cb77f-jdfzq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jdfzq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-jdfzq,UID:fcd83cde-20e6-4363-aeb4-708038026379,ResourceVersion:23064982,Generation:0,CreationTimestamp:2020-02-04 13:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021db277 0xc0021db278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021db2e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021db300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.035: INFO: Pod "nginx-deployment-55fb7cb77f-khsb6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-khsb6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-khsb6,UID:3c5cb12f-3984-426c-85f6-67640de0e584,ResourceVersion:23065007,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021db387 
0xc0021db388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021db3f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021db410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.035: INFO: Pod "nginx-deployment-55fb7cb77f-m5hd7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m5hd7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-m5hd7,UID:4f19b862-a58a-4a65-a9a6-ace546b650a3,ResourceVersion:23064991,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021db4a7 0xc0021db4a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0021db520} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021db540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.035: INFO: Pod "nginx-deployment-55fb7cb77f-v6qcd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v6qcd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-v6qcd,UID:59e4785e-bc92-4731-8185-af7c236d0032,ResourceVersion:23064949,Generation:0,CreationTimestamp:2020-02-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021db5c7 0xc0021db5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021db640} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021db660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-04 13:08:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.035: INFO: Pod "nginx-deployment-55fb7cb77f-vcc5r" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vcc5r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-vcc5r,UID:1ce176c8-b1c1-4759-86f5-0cbf25ee2fc9,ResourceVersion:23064945,Generation:0,CreationTimestamp:2020-02-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021db737 0xc0021db738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021db7a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021db7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-04 13:08:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.036: INFO: Pod "nginx-deployment-55fb7cb77f-wl64j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wl64j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-wl64j,UID:7e9e167a-a413-4840-a0fb-94d7b32d341d,ResourceVersion:23065015,Generation:0,CreationTimestamp:2020-02-04 13:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021db897 0xc0021db898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0021db910} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021db930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-04 13:09:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.036: INFO: Pod "nginx-deployment-55fb7cb77f-xj9q4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xj9q4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-xj9q4,UID:3a33d081-c0f8-459b-80a1-6358b3603848,ResourceVersion:23065020,Generation:0,CreationTimestamp:2020-02-04 13:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021dba07 0xc0021dba08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021dba70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021dba90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-04 13:09:05 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.036: INFO: Pod "nginx-deployment-55fb7cb77f-z4lv5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z4lv5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-55fb7cb77f-z4lv5,UID:c2ffa4d3-614c-450b-a529-3cab6a94e7dd,ResourceVersion:23064920,Generation:0,CreationTimestamp:2020-02-04 13:08:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c3705200-0c2d-4a10-aa72-369df021034c 0xc0021dbb67 0xc0021dbb68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021dbbd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021dbbf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-04 13:08:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.036: INFO: Pod "nginx-deployment-7b8c6f4498-56xj4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-56xj4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-56xj4,UID:1e2c1391-6c2f-408a-857f-9313e7711f93,ResourceVersion:23065004,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc0021dbcc7 0xc0021dbcc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021dbd30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021dbd50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.036: INFO: Pod "nginx-deployment-7b8c6f4498-8f9xm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8f9xm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-8f9xm,UID:235037f4-89f2-496a-92b3-4022bcd05bdb,ResourceVersion:23065006,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc0021dbdd7 
0xc0021dbdd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021dbe50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021dbe70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.036: INFO: Pod "nginx-deployment-7b8c6f4498-8jnj8" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8jnj8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-8jnj8,UID:efdd132f-4e76-4a74-875f-3fa083dc38d9,ResourceVersion:23064863,Generation:0,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc0021dbef7 0xc0021dbef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021dbf70} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021dbf90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-04 13:08:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 13:08:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://36db3332c96c299526cc1aa50c32a7135a31801d17e1d11a9d9fc3005e584f24}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.036: INFO: Pod "nginx-deployment-7b8c6f4498-chfmn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-chfmn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-chfmn,UID:ad89052c-a54c-44dd-b629-09aa37c2b44c,ResourceVersion:23065029,Generation:0,CreationTimestamp:2020-02-04 13:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb6067 0xc001bb6068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb60d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb60f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-04 13:09:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.036: INFO: Pod "nginx-deployment-7b8c6f4498-ddlx4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ddlx4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-ddlx4,UID:96181d15-6394-4bc3-a598-0262cc5b6691,ResourceVersion:23064983,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb61e7 0xc001bb61e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb6260} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb6280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.037: INFO: Pod "nginx-deployment-7b8c6f4498-ffcx9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ffcx9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-ffcx9,UID:7116e037-5b08-4523-a589-5dd15244ed65,ResourceVersion:23065025,Generation:0,CreationTimestamp:2020-02-04 13:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb6307 0xc001bb6308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb6390} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb63b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-04 13:09:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.037: INFO: Pod "nginx-deployment-7b8c6f4498-fz2lb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fz2lb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-fz2lb,UID:67c4572a-5757-4a49-959e-a38da731cb0c,ResourceVersion:23064894,Generation:0,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb6477 0xc001bb6478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb64e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb6500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-04 13:08:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 13:08:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://60f00779bf019b7c4b277c567aeb48b70f1fe13769ba993b7e494f08cfdb5f60}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.037: INFO: Pod "nginx-deployment-7b8c6f4498-gv2f2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gv2f2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-gv2f2,UID:3d14ee5a-c228-42b1-bb7d-9c4396aa07a5,ResourceVersion:23065011,Generation:0,CreationTimestamp:2020-02-04 13:09:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb65d7 0xc001bb65d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb6650} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb6670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-04 13:09:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.037: INFO: Pod "nginx-deployment-7b8c6f4498-hljv5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hljv5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-hljv5,UID:3936f932-0365-4b0f-acf4-831e6e74b438,ResourceVersion:23065005,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb6737 0xc001bb6738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb67b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb67d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.037: INFO: Pod "nginx-deployment-7b8c6f4498-jxtft" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jxtft,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-jxtft,UID:db89d49e-5e10-4716-a47f-d2f17c363963,ResourceVersion:23064891,Generation:0,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb6857 
0xc001bb6858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb68c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb68e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 
13:08:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-04 13:08:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 13:08:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b32c91a929742b5bf12af3d4877baa9316df9ea1f6e3d2a273c2b4fd43b848bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.037: INFO: Pod "nginx-deployment-7b8c6f4498-k2nlv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k2nlv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-k2nlv,UID:9a2fed0e-e7d6-469b-b353-1d5849228c5d,ResourceVersion:23064872,Generation:0,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb69b7 0xc001bb69b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb6a40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb6a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-04 13:08:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 13:08:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e880d85a67ab291354ab9ab66f11808654277412df89f9ae79900ade3437d417}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.037: INFO: Pod "nginx-deployment-7b8c6f4498-lkt8t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lkt8t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-lkt8t,UID:da9133be-b403-4597-9750-923343d24415,ResourceVersion:23064849,Generation:0,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb6b37 0xc001bb6b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb6bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb6bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-04 13:08:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 13:08:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://74ded1842e0f32b3ed14e185fe4b034b2926c3b4155098dd5f7606994a26895a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.037: INFO: Pod "nginx-deployment-7b8c6f4498-pgxw5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pgxw5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-pgxw5,UID:553ed896-2f26-4cb5-a35c-f1c5a1650a2a,ResourceVersion:23064989,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb6ca7 0xc001bb6ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb6d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb6d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.038: INFO: Pod "nginx-deployment-7b8c6f4498-pp5xx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pp5xx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-pp5xx,UID:9376ec03-0fba-45cc-b285-a62ea8044e1e,ResourceVersion:23064882,Generation:0,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb6dd7 0xc001bb6dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb6e40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb6e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-04 13:08:29 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 13:08:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ddaba16830ea36f228ac26453c51b4b322ec90159941310ed5f8c181275e3334}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.038: INFO: Pod "nginx-deployment-7b8c6f4498-q2gg4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q2gg4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-q2gg4,UID:a9b0ecda-37c9-4a17-9056-ecc14ce31fb1,ResourceVersion:23064986,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb6f47 0xc001bb6f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb6fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb6fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.038: INFO: Pod "nginx-deployment-7b8c6f4498-q66pd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q66pd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-q66pd,UID:e187791b-75a1-4990-9432-78627316b0da,ResourceVersion:23064868,Generation:0,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb7067 
0xc001bb7068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb70e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb7100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-04 13:08:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 13:08:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bb1a19beea1ce254d6c74474508f0fbbe836510fe56364d96e8d28f84fc9e85e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.038: INFO: Pod "nginx-deployment-7b8c6f4498-rfgjw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rfgjw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-rfgjw,UID:12a44751-9791-452f-8c20-083866612cfd,ResourceVersion:23065003,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb71d7 0xc001bb71d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb7240} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb7260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.038: INFO: Pod "nginx-deployment-7b8c6f4498-w87ws" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w87ws,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-w87ws,UID:e3e99a89-7c4f-454f-a39f-2e811d45cdc2,ResourceVersion:23064990,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb72e7 
0xc001bb72e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb7360} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb7380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.038: INFO: Pod "nginx-deployment-7b8c6f4498-xfd5n" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xfd5n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-xfd5n,UID:e12e81ea-fbda-4ec8-853a-42905cd7fa0f,ResourceVersion:23064859,Generation:0,CreationTimestamp:2020-02-04 13:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb7407 0xc001bb7408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb7480} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb74a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:08:29 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-04 13:08:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 13:08:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ee5079bb1968a9b27c4b94e6e66d0c6d6df8475817a1deb7515b5eace694ec5e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 4 13:09:15.038: INFO: Pod "nginx-deployment-7b8c6f4498-xglg5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xglg5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1168,SelfLink:/api/v1/namespaces/deployment-1168/pods/nginx-deployment-7b8c6f4498-xglg5,UID:b7deea68-1e5c-4888-aa07-b17bf9afaeb3,ResourceVersion:23065008,Generation:0,CreationTimestamp:2020-02-04 13:09:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 a047dcf8-5ecd-4652-b5d6-c2bc9726badc 0xc001bb7587 0xc001bb7588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bqfk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bqfk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4bqfk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bb75f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bb7610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:09:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:09:15.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1168" for this suite. 
Feb 4 13:10:49.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:10:50.065: INFO: namespace deployment-1168 deletion completed in 1m32.963315213s • [SLOW TEST:140.851 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:10:50.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 4 13:10:50.250: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Feb 4 13:10:50.381: INFO: Number of nodes with available pods: 0 Feb 4 13:10:50.381: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:10:52.494: INFO: Number of nodes with available pods: 0 Feb 4 13:10:52.494: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:10:53.395: INFO: Number of nodes with available pods: 0 Feb 4 13:10:53.395: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:10:54.394: INFO: Number of nodes with available pods: 0 Feb 4 13:10:54.394: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:10:55.398: INFO: Number of nodes with available pods: 0 Feb 4 13:10:55.398: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:10:56.402: INFO: Number of nodes with available pods: 0 Feb 4 13:10:56.402: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:10:58.089: INFO: Number of nodes with available pods: 0 Feb 4 13:10:58.089: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:10:59.570: INFO: Number of nodes with available pods: 0 Feb 4 13:10:59.570: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:11:00.444: INFO: Number of nodes with available pods: 0 Feb 4 13:11:00.444: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:11:01.478: INFO: Number of nodes with available pods: 0 Feb 4 13:11:01.478: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:11:02.464: INFO: Number of nodes with available pods: 2 Feb 4 13:11:02.464: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 4 13:11:02.532: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:02.532: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 4 13:11:03.553: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:03.553: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:04.563: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:04.563: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:05.559: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:05.559: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:06.556: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:06.556: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:07.552: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:07.552: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:08.557: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:08.557: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 4 13:11:09.554: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:09.554: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:09.554: INFO: Pod daemon-set-vpszx is not available Feb 4 13:11:10.557: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:10.557: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:10.557: INFO: Pod daemon-set-vpszx is not available Feb 4 13:11:11.551: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:11.551: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:11.551: INFO: Pod daemon-set-vpszx is not available Feb 4 13:11:12.555: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:12.555: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:12.555: INFO: Pod daemon-set-vpszx is not available Feb 4 13:11:13.550: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:13.550: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 4 13:11:13.550: INFO: Pod daemon-set-vpszx is not available Feb 4 13:11:14.553: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:14.553: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:14.553: INFO: Pod daemon-set-vpszx is not available Feb 4 13:11:15.548: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:15.548: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:15.548: INFO: Pod daemon-set-vpszx is not available Feb 4 13:11:16.572: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:16.572: INFO: Wrong image for pod: daemon-set-vpszx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:16.572: INFO: Pod daemon-set-vpszx is not available Feb 4 13:11:17.557: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:17.557: INFO: Pod daemon-set-s7h8d is not available Feb 4 13:11:18.553: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:18.553: INFO: Pod daemon-set-s7h8d is not available Feb 4 13:11:19.548: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:19.549: INFO: Pod daemon-set-s7h8d is not available Feb 4 13:11:20.554: INFO: Wrong image for pod: daemon-set-rdvcb. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:20.554: INFO: Pod daemon-set-s7h8d is not available Feb 4 13:11:21.555: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:21.555: INFO: Pod daemon-set-s7h8d is not available Feb 4 13:11:22.552: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:22.552: INFO: Pod daemon-set-s7h8d is not available Feb 4 13:11:23.551: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:23.551: INFO: Pod daemon-set-s7h8d is not available Feb 4 13:11:24.558: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:24.558: INFO: Pod daemon-set-s7h8d is not available Feb 4 13:11:25.580: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:25.581: INFO: Pod daemon-set-s7h8d is not available Feb 4 13:11:27.051: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:27.556: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:28.707: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:29.554: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 4 13:11:30.556: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:30.556: INFO: Pod daemon-set-rdvcb is not available Feb 4 13:11:31.555: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:31.556: INFO: Pod daemon-set-rdvcb is not available Feb 4 13:11:32.549: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:32.549: INFO: Pod daemon-set-rdvcb is not available Feb 4 13:11:33.552: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:33.553: INFO: Pod daemon-set-rdvcb is not available Feb 4 13:11:34.552: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:34.552: INFO: Pod daemon-set-rdvcb is not available Feb 4 13:11:35.549: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:35.549: INFO: Pod daemon-set-rdvcb is not available Feb 4 13:11:36.549: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:36.550: INFO: Pod daemon-set-rdvcb is not available Feb 4 13:11:37.549: INFO: Wrong image for pod: daemon-set-rdvcb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 4 13:11:37.549: INFO: Pod daemon-set-rdvcb is not available Feb 4 13:11:38.564: INFO: Pod daemon-set-hhq9j is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Feb 4 13:11:38.585: INFO: Number of nodes with available pods: 1 Feb 4 13:11:38.586: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 4 13:11:39.614: INFO: Number of nodes with available pods: 1 Feb 4 13:11:39.614: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 4 13:11:40.924: INFO: Number of nodes with available pods: 1 Feb 4 13:11:40.924: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 4 13:11:41.698: INFO: Number of nodes with available pods: 1 Feb 4 13:11:41.699: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 4 13:11:42.626: INFO: Number of nodes with available pods: 1 Feb 4 13:11:42.627: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 4 13:11:43.613: INFO: Number of nodes with available pods: 1 Feb 4 13:11:43.613: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 4 13:11:44.607: INFO: Number of nodes with available pods: 1 Feb 4 13:11:44.607: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 4 13:11:45.966: INFO: Number of nodes with available pods: 1 Feb 4 13:11:45.967: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 4 13:11:46.602: INFO: Number of nodes with available pods: 1 Feb 4 13:11:46.602: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 4 13:11:47.602: INFO: Number of nodes with available pods: 2 Feb 4 13:11:47.602: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-431, will wait for the garbage collector to delete the pods Feb 4 13:11:47.682: INFO: Deleting DaemonSet.extensions daemon-set took: 10.668115ms Feb 4 13:11:47.983: INFO: Terminating 
DaemonSet.extensions daemon-set pods took: 300.825739ms Feb 4 13:12:06.591: INFO: Number of nodes with available pods: 0 Feb 4 13:12:06.591: INFO: Number of running nodes: 0, number of available pods: 0 Feb 4 13:12:06.594: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-431/daemonsets","resourceVersion":"23065549"},"items":null} Feb 4 13:12:06.597: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-431/pods","resourceVersion":"23065549"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:12:06.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-431" for this suite. Feb 4 13:12:12.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:12:12.777: INFO: namespace daemonsets-431 deletion completed in 6.15849757s • [SLOW TEST:82.710 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:12:12.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in 
namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9329.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9329.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9329.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9329.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9329.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9329.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9329.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9329.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9329.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9329.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9329.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.96.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.96.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.96.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.96.112_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9329.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9329.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9329.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9329.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9329.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9329.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9329.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9329.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9329.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9329.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9329.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.96.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.96.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.96.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.96.112_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 13:12:31.160: INFO: Unable to read wheezy_udp@dns-test-service.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.173: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.178: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.183: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.190: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods 
dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.194: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.201: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.210: INFO: Unable to read 10.106.96.112_udp@PTR from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.217: INFO: Unable to read 10.106.96.112_tcp@PTR from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.221: INFO: Unable to read jessie_udp@dns-test-service.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.225: INFO: Unable to read jessie_tcp@dns-test-service.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.230: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.236: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the 
requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.242: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.245: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-9329.svc.cluster.local from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.249: INFO: Unable to read jessie_udp@PodARecord from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.252: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.263: INFO: Unable to read 10.106.96.112_udp@PTR from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.270: INFO: Unable to read 10.106.96.112_tcp@PTR from pod dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b: the server could not find the requested resource (get pods dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b) Feb 4 13:12:31.270: INFO: Lookups using dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b failed for: [wheezy_udp@dns-test-service.dns-9329.svc.cluster.local wheezy_tcp@dns-test-service.dns-9329.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9329.svc.cluster.local 
wheezy_tcp@_http._tcp.test-service-2.dns-9329.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.96.112_udp@PTR 10.106.96.112_tcp@PTR jessie_udp@dns-test-service.dns-9329.svc.cluster.local jessie_tcp@dns-test-service.dns-9329.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9329.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-9329.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-9329.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.96.112_udp@PTR 10.106.96.112_tcp@PTR] Feb 4 13:12:36.443: INFO: DNS probes using dns-9329/dns-test-21eba511-0ae3-466c-bb79-c234d9cf406b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:12:37.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9329" for this suite. 
Feb 4 13:12:43.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:12:43.266: INFO: namespace dns-9329 deletion completed in 6.135275995s • [SLOW TEST:30.489 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:12:43.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-6257/secret-test-58076e9c-482b-4aa9-baf5-47a60e467bcc STEP: Creating a pod to test consume secrets Feb 4 13:12:43.416: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b" in namespace "secrets-6257" to be "success or failure" Feb 4 13:12:43.426: INFO: Pod "pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.576677ms Feb 4 13:12:45.432: INFO: Pod "pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0162242s Feb 4 13:12:47.438: INFO: Pod "pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021937976s Feb 4 13:12:49.448: INFO: Pod "pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032211934s Feb 4 13:12:51.459: INFO: Pod "pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042753215s Feb 4 13:12:53.470: INFO: Pod "pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053752848s STEP: Saw pod success Feb 4 13:12:53.470: INFO: Pod "pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b" satisfied condition "success or failure" Feb 4 13:12:53.476: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b container env-test: STEP: delete the pod Feb 4 13:12:53.538: INFO: Waiting for pod pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b to disappear Feb 4 13:12:53.541: INFO: Pod pod-configmaps-6ba97ed6-cefb-496f-bb6d-fe3eb013247b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:12:53.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6257" for this suite. 
Feb 4 13:12:59.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:12:59.721: INFO: namespace secrets-6257 deletion completed in 6.176348587s • [SLOW TEST:16.455 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:12:59.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 4 13:12:59.783: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 4 13:12:59.814: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 4 13:13:04.850: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 4 13:13:10.877: INFO: Creating deployment "test-rolling-update-deployment" Feb 4 13:13:10.887: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set 
"test-rolling-update-controller" has Feb 4 13:13:10.904: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 4 13:13:12.919: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 4 13:13:12.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418791, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418790, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:13:14.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418791, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63716418790, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:13:16.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418791, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418791, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716418790, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:13:18.939: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 4 13:13:18.960: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-5562,SelfLink:/apis/apps/v1/namespaces/deployment-5562/deployments/test-rolling-update-deployment,UID:0da5216f-cde3-4450-9111-eb477b0b2f56,ResourceVersion:23065804,Generation:1,CreationTimestamp:2020-02-04 13:13:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-04 13:13:11 +0000 UTC 2020-02-04 13:13:11 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-04 13:13:18 +0000 UTC 2020-02-04 13:13:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 4 13:13:18.965: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-5562,SelfLink:/apis/apps/v1/namespaces/deployment-5562/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:da92b74f-c565-4187-8cda-6a0fadccb992,ResourceVersion:23065793,Generation:1,CreationTimestamp:2020-02-04 13:13:10 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0da5216f-cde3-4450-9111-eb477b0b2f56 0xc001d46977 0xc001d46978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 4 13:13:18.965: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 4 13:13:18.965: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-5562,SelfLink:/apis/apps/v1/namespaces/deployment-5562/replicasets/test-rolling-update-controller,UID:d4f7ca56-4b25-4be2-a239-b1fa4789ae84,ResourceVersion:23065803,Generation:2,CreationTimestamp:2020-02-04 13:12:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0da5216f-cde3-4450-9111-eb477b0b2f56 0xc001d468a7 0xc001d468a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 4 13:13:18.970: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-ffss6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-ffss6,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-5562,SelfLink:/api/v1/namespaces/deployment-5562/pods/test-rolling-update-deployment-79f6b9d75c-ffss6,UID:8286272d-a5d2-4815-a0d5-9c2a344bfb66,ResourceVersion:23065792,Generation:0,CreationTimestamp:2020-02-04 13:13:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c da92b74f-c565-4187-8cda-6a0fadccb992 0xc001d47267 0xc001d47268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-g5hd7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g5hd7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-g5hd7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d472e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d47300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:13:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:13:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:13:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:13:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-04 13:13:11 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-04 13:13:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://dd8f86c9b4d16062e5de50fdaff72ddaf231ef7b737b95cf8883576fea7d808c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:13:18.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-5562" for this suite. Feb 4 13:13:25.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:13:25.265: INFO: namespace deployment-5562 deletion completed in 6.289467739s • [SLOW TEST:25.543 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:13:25.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 4 13:13:26.503: INFO: Waiting up to 5m0s for pod "pod-aff52977-0397-4356-b0ab-be47d92f371b" in namespace "emptydir-8572" to be "success or failure" Feb 4 13:13:26.520: INFO: Pod "pod-aff52977-0397-4356-b0ab-be47d92f371b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.079283ms Feb 4 13:13:28.536: INFO: Pod "pod-aff52977-0397-4356-b0ab-be47d92f371b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032662402s Feb 4 13:13:30.584: INFO: Pod "pod-aff52977-0397-4356-b0ab-be47d92f371b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081039051s Feb 4 13:13:32.605: INFO: Pod "pod-aff52977-0397-4356-b0ab-be47d92f371b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101460351s Feb 4 13:13:34.625: INFO: Pod "pod-aff52977-0397-4356-b0ab-be47d92f371b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121391031s Feb 4 13:13:36.644: INFO: Pod "pod-aff52977-0397-4356-b0ab-be47d92f371b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.140878933s Feb 4 13:13:38.664: INFO: Pod "pod-aff52977-0397-4356-b0ab-be47d92f371b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.160396552s Feb 4 13:13:40.673: INFO: Pod "pod-aff52977-0397-4356-b0ab-be47d92f371b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.169979834s STEP: Saw pod success Feb 4 13:13:40.673: INFO: Pod "pod-aff52977-0397-4356-b0ab-be47d92f371b" satisfied condition "success or failure" Feb 4 13:13:40.677: INFO: Trying to get logs from node iruya-node pod pod-aff52977-0397-4356-b0ab-be47d92f371b container test-container: STEP: delete the pod Feb 4 13:13:40.871: INFO: Waiting for pod pod-aff52977-0397-4356-b0ab-be47d92f371b to disappear Feb 4 13:13:40.904: INFO: Pod pod-aff52977-0397-4356-b0ab-be47d92f371b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:13:40.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8572" for this suite. 
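The EmptyDir test above polls the pod phase every ~2s ("Waiting up to 5m0s for pod … to be 'success or failure'") until it leaves `Pending`. A minimal sketch of that wait loop, assuming a hypothetical `get_phase` callable standing in for the API call that returns the pod's current phase string:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0, now=time.monotonic):
    """Poll get_phase() until the pod reaches a terminal phase or the timeout expires.

    Mirrors the framework's "success or failure" wait seen in the log above.
    get_phase is a hypothetical stand-in for an API call returning the phase.
    """
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        if phase in ("Succeeded", "Failed"):
            # Terminal phase reached; report it with the elapsed time,
            # as the log records "Elapsed: …" on each observation.
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        time.sleep(interval)
```

The real framework also treats container-level failures specially; this sketch covers only the phase polling visible in the log.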
Feb 4 13:13:46.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:13:47.104: INFO: namespace emptydir-8572 deletion completed in 6.144339527s • [SLOW TEST:21.838 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:13:47.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Feb 4 13:13:47.207: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:14:16.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "pods-2810" for this suite. Feb 4 13:14:22.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:14:22.694: INFO: namespace pods-2810 deletion completed in 6.157213216s • [SLOW TEST:35.589 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:14:22.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 4 13:14:22.827: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 4 13:14:22.838: INFO: Waiting for terminating namespaces to be deleted... 
Feb 4 13:14:22.842: INFO: Logging pods the kubelet thinks is on node iruya-node before test Feb 4 13:14:22.862: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Feb 4 13:14:22.862: INFO: Container weave ready: true, restart count 0 Feb 4 13:14:22.862: INFO: Container weave-npc ready: true, restart count 0 Feb 4 13:14:22.862: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Feb 4 13:14:22.862: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 13:14:22.862: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Feb 4 13:14:22.882: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Feb 4 13:14:22.882: INFO: Container kube-apiserver ready: true, restart count 0 Feb 4 13:14:22.882: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Feb 4 13:14:22.882: INFO: Container kube-scheduler ready: true, restart count 13 Feb 4 13:14:22.882: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 4 13:14:22.882: INFO: Container coredns ready: true, restart count 0 Feb 4 13:14:22.882: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Feb 4 13:14:22.882: INFO: Container etcd ready: true, restart count 0 Feb 4 13:14:22.882: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Feb 4 13:14:22.882: INFO: Container weave ready: true, restart count 0 Feb 4 13:14:22.882: INFO: Container weave-npc ready: true, restart count 0 Feb 4 13:14:22.882: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container 
statuses recorded) Feb 4 13:14:22.882: INFO: Container coredns ready: true, restart count 0 Feb 4 13:14:22.882: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Feb 4 13:14:22.882: INFO: Container kube-controller-manager ready: true, restart count 20 Feb 4 13:14:22.882: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Feb 4 13:14:22.882: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Feb 4 13:14:23.029: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Feb 4 13:14:23.029: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Feb 4 13:14:23.029: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Feb 4 13:14:23.029: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Feb 4 13:14:23.029: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Feb 4 13:14:23.029: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Feb 4 13:14:23.029: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Feb 4 13:14:23.029: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Feb 4 13:14:23.029: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Feb 4 13:14:23.029: INFO: Pod weave-net-rlp57 requesting 
resource cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-a94c34bf-7230-4a42-87d9-b3f30997f8ec.15f0354a7a82f0f1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4006/filler-pod-a94c34bf-7230-4a42-87d9-b3f30997f8ec to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-a94c34bf-7230-4a42-87d9-b3f30997f8ec.15f0354ba3200238], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a94c34bf-7230-4a42-87d9-b3f30997f8ec.15f0354ca7432a59], Reason = [Created], Message = [Created container filler-pod-a94c34bf-7230-4a42-87d9-b3f30997f8ec] STEP: Considering event: Type = [Normal], Name = [filler-pod-a94c34bf-7230-4a42-87d9-b3f30997f8ec.15f0354ccca0d872], Reason = [Started], Message = [Started container filler-pod-a94c34bf-7230-4a42-87d9-b3f30997f8ec] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac94dd2a-b258-4e2c-a7f3-c030851532d3.15f0354a73a0d0b0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4006/filler-pod-ac94dd2a-b258-4e2c-a7f3-c030851532d3 to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac94dd2a-b258-4e2c-a7f3-c030851532d3.15f0354ba2296bd0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac94dd2a-b258-4e2c-a7f3-c030851532d3.15f0354c8628deaf], Reason = [Created], Message = [Created container filler-pod-ac94dd2a-b258-4e2c-a7f3-c030851532d3] STEP: Considering event: Type = [Normal], Name = [filler-pod-ac94dd2a-b258-4e2c-a7f3-c030851532d3.15f0354cb78be3e9], Reason = [Started], Message = [Started container filler-pod-ac94dd2a-b258-4e2c-a7f3-c030851532d3] STEP: Considering event: Type = 
[Warning], Name = [additional-pod.15f0354d487707cf], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:14:36.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4006" for this suite. Feb 4 13:14:42.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:14:42.716: INFO: namespace sched-pred-4006 deletion completed in 6.281327209s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:20.022 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:14:42.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:15:00.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6682" for this suite. Feb 4 13:15:06.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:15:06.696: INFO: namespace kubelet-test-6682 deletion completed in 6.133130814s • [SLOW TEST:23.979 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:15:06.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-c660cda4-aa86-4997-bd79-d22128f3e294 in namespace container-probe-9259 Feb 4 13:15:16.914: INFO: Started pod busybox-c660cda4-aa86-4997-bd79-d22128f3e294 in namespace container-probe-9259 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 13:15:16.923: INFO: Initial restart count of pod busybox-c660cda4-aa86-4997-bd79-d22128f3e294 is 0 Feb 4 13:16:07.254: INFO: Restart count of pod container-probe-9259/busybox-c660cda4-aa86-4997-bd79-d22128f3e294 is now 1 (50.330392293s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:16:07.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9259" for this suite. 
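The liveness-probe test above records the pod's initial `restartCount` (0) and then waits until it increases, reporting how long that took ("is now 1 (50.330392293s elapsed)"). A small sketch of that check, assuming the observations have already been collected as `(elapsed_seconds, restart_count)` samples (the sampling itself is hypothetical; the real test watches the pod status):

```python
def first_restart_delay(observations):
    """Return the elapsed time at which restartCount first exceeds its
    initial value, or None if no restart was observed.

    observations: list of (elapsed_seconds, restart_count) samples,
    ordered by time, with the initial count in the first sample.
    """
    initial_count = observations[0][1]
    for elapsed, count in observations[1:]:
        if count > initial_count:
            return elapsed
    return None
```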
Feb 4 13:16:13.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:16:13.639: INFO: namespace container-probe-9259 deletion completed in 6.281647591s • [SLOW TEST:66.942 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:16:13.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 4 13:16:13.750: INFO: Waiting up to 5m0s for pod "downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c" in namespace "downward-api-4353" to be "success or failure" Feb 4 13:16:13.760: INFO: Pod "downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.589947ms Feb 4 13:16:15.782: INFO: Pod "downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032214321s Feb 4 13:16:17.792: INFO: Pod "downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.042515325s Feb 4 13:16:19.815: INFO: Pod "downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06465802s Feb 4 13:16:21.826: INFO: Pod "downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075929966s Feb 4 13:16:23.868: INFO: Pod "downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117656159s Feb 4 13:16:25.890: INFO: Pod "downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.140174991s STEP: Saw pod success Feb 4 13:16:25.890: INFO: Pod "downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c" satisfied condition "success or failure" Feb 4 13:16:25.898: INFO: Trying to get logs from node iruya-node pod downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c container dapi-container: STEP: delete the pod Feb 4 13:16:26.062: INFO: Waiting for pod downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c to disappear Feb 4 13:16:26.081: INFO: Pod downward-api-93c6d8c6-aadc-4ed1-9e52-6f4dde59248c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:16:26.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4353" for this suite. 
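The Downward API test above injects pod metadata (here the pod UID) into the container's environment via `env[].valueFrom.fieldRef`. A sketch of building that part of the pod spec as plain dicts; the helper name and its input mapping are illustrative, but the nested key structure matches the Kubernetes downward-API env syntax the test exercises:

```python
def downward_api_env(fields):
    """Build a PodSpec 'env' list exposing pod metadata via the downward API.

    fields maps env-var names to fieldRef paths,
    e.g. {"POD_UID": "metadata.uid"}.
    """
    return [
        {"name": name, "valueFrom": {"fieldRef": {"fieldPath": path}}}
        for name, path in fields.items()
    ]
```

For example, `downward_api_env({"POD_UID": "metadata.uid"})` yields the env entry the test container reads its UID from.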
Feb 4 13:16:32.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:16:32.264: INFO: namespace downward-api-4353 deletion completed in 6.173823865s • [SLOW TEST:18.625 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:16:32.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9275 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 4 13:16:32.358: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 4 13:17:12.646: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-9275 PodName:host-test-container-pod ContainerName:hostexec Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 4 13:17:12.646: INFO: >>> kubeConfig: /root/.kube/config I0204 13:17:12.736250 8 log.go:172] (0xc0009b66e0) (0xc001c57400) Create stream I0204 13:17:12.736442 8 log.go:172] (0xc0009b66e0) (0xc001c57400) Stream added, broadcasting: 1 I0204 13:17:12.747534 8 log.go:172] (0xc0009b66e0) Reply frame received for 1 I0204 13:17:12.747619 8 log.go:172] (0xc0009b66e0) (0xc001b6c960) Create stream I0204 13:17:12.747634 8 log.go:172] (0xc0009b66e0) (0xc001b6c960) Stream added, broadcasting: 3 I0204 13:17:12.749580 8 log.go:172] (0xc0009b66e0) Reply frame received for 3 I0204 13:17:12.749605 8 log.go:172] (0xc0009b66e0) (0xc0016fc3c0) Create stream I0204 13:17:12.749614 8 log.go:172] (0xc0009b66e0) (0xc0016fc3c0) Stream added, broadcasting: 5 I0204 13:17:12.750893 8 log.go:172] (0xc0009b66e0) Reply frame received for 5 I0204 13:17:12.909110 8 log.go:172] (0xc0009b66e0) Data frame received for 3 I0204 13:17:12.909200 8 log.go:172] (0xc001b6c960) (3) Data frame handling I0204 13:17:12.909229 8 log.go:172] (0xc001b6c960) (3) Data frame sent I0204 13:17:13.051036 8 log.go:172] (0xc0009b66e0) (0xc0016fc3c0) Stream removed, broadcasting: 5 I0204 13:17:13.051173 8 log.go:172] (0xc0009b66e0) Data frame received for 1 I0204 13:17:13.051223 8 log.go:172] (0xc001c57400) (1) Data frame handling I0204 13:17:13.051313 8 log.go:172] (0xc001c57400) (1) Data frame sent I0204 13:17:13.051344 8 log.go:172] (0xc0009b66e0) (0xc001b6c960) Stream removed, broadcasting: 3 I0204 13:17:13.051414 8 log.go:172] (0xc0009b66e0) (0xc001c57400) Stream removed, broadcasting: 1 I0204 13:17:13.051468 8 log.go:172] (0xc0009b66e0) Go away received I0204 13:17:13.051762 8 log.go:172] (0xc0009b66e0) (0xc001c57400) Stream removed, broadcasting: 1 I0204 13:17:13.051790 8 log.go:172] (0xc0009b66e0) (0xc001b6c960) Stream removed, broadcasting: 3 I0204 13:17:13.051798 8 log.go:172] (0xc0009b66e0) (0xc0016fc3c0) Stream removed, broadcasting: 5 Feb 4 
13:17:13.051: INFO: Waiting for endpoints: map[]
Feb 4 13:17:13.063: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-9275 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 4 13:17:13.063: INFO: >>> kubeConfig: /root/.kube/config
I0204 13:17:13.109068 8 log.go:172] (0xc00145c420) (0xc001d68500) Create stream
I0204 13:17:13.109095 8 log.go:172] (0xc00145c420) (0xc001d68500) Stream added, broadcasting: 1
I0204 13:17:13.116028 8 log.go:172] (0xc00145c420) Reply frame received for 1
I0204 13:17:13.116059 8 log.go:172] (0xc00145c420) (0xc001fc7540) Create stream
I0204 13:17:13.116071 8 log.go:172] (0xc00145c420) (0xc001fc7540) Stream added, broadcasting: 3
I0204 13:17:13.117526 8 log.go:172] (0xc00145c420) Reply frame received for 3
I0204 13:17:13.117559 8 log.go:172] (0xc00145c420) (0xc001c574a0) Create stream
I0204 13:17:13.117573 8 log.go:172] (0xc00145c420) (0xc001c574a0) Stream added, broadcasting: 5
I0204 13:17:13.119514 8 log.go:172] (0xc00145c420) Reply frame received for 5
I0204 13:17:13.223970 8 log.go:172] (0xc00145c420) Data frame received for 3
I0204 13:17:13.224006 8 log.go:172] (0xc001fc7540) (3) Data frame handling
I0204 13:17:13.224026 8 log.go:172] (0xc001fc7540) (3) Data frame sent
I0204 13:17:13.353286 8 log.go:172] (0xc00145c420) Data frame received for 1
I0204 13:17:13.353388 8 log.go:172] (0xc001d68500) (1) Data frame handling
I0204 13:17:13.353416 8 log.go:172] (0xc001d68500) (1) Data frame sent
I0204 13:17:13.354994 8 log.go:172] (0xc00145c420) (0xc001d68500) Stream removed, broadcasting: 1
I0204 13:17:13.355375 8 log.go:172] (0xc00145c420) (0xc001fc7540) Stream removed, broadcasting: 3
I0204 13:17:13.355567 8 log.go:172] (0xc00145c420) (0xc001c574a0) Stream removed, broadcasting: 5
I0204 13:17:13.355618 8 log.go:172] (0xc00145c420) (0xc001d68500) Stream removed, broadcasting: 1
I0204 13:17:13.355634 8 log.go:172] (0xc00145c420) (0xc001fc7540) Stream removed, broadcasting: 3
I0204 13:17:13.355667 8 log.go:172] (0xc00145c420) (0xc001c574a0) Stream removed, broadcasting: 5
I0204 13:17:13.355908 8 log.go:172] (0xc00145c420) Go away received
Feb 4 13:17:13.356: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:17:13.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9275" for this suite.
Feb 4 13:17:37.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:17:37.524: INFO: namespace pod-network-test-9275 deletion completed in 24.156795214s

• [SLOW TEST:65.259 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:17:37.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 4 13:17:48.868: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:17:48.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4472" for this suite.
Feb 4 13:17:54.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:17:55.181: INFO: namespace container-runtime-4472 deletion completed in 6.266950663s

• [SLOW TEST:17.657 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:17:55.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 4 13:17:55.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 4 13:17:57.683: INFO: stderr: ""
Feb 4 13:17:57.683: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:17:57.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9016" for this suite.
Feb 4 13:18:03.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:18:03.850: INFO: namespace kubectl-9016 deletion completed in 6.156982627s

• [SLOW TEST:8.668 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:18:03.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 4 13:18:03.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 4 13:18:04.104: INFO: stderr: ""
Feb 4 13:18:04.104: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:18:04.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3463" for this suite.
Feb 4 13:18:10.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:18:10.254: INFO: namespace kubectl-3463 deletion completed in 6.143423189s

• [SLOW TEST:6.402 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:18:10.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 4 13:18:10.410: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d" in namespace "projected-7962" to be "success or failure"
Feb 4 13:18:10.458: INFO: Pod "downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 47.96622ms
Feb 4 13:18:12.526: INFO: Pod "downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116147217s
Feb 4 13:18:14.543: INFO: Pod "downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132780896s
Feb 4 13:18:16.560: INFO: Pod "downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149671038s
Feb 4 13:18:18.575: INFO: Pod "downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164919186s
Feb 4 13:18:20.586: INFO: Pod "downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.176133985s
Feb 4 13:18:22.605: INFO: Pod "downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.194483619s
STEP: Saw pod success
Feb 4 13:18:22.605: INFO: Pod "downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d" satisfied condition "success or failure"
Feb 4 13:18:22.610: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d container client-container:
STEP: delete the pod
Feb 4 13:18:22.852: INFO: Waiting for pod downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d to disappear
Feb 4 13:18:22.860: INFO: Pod downwardapi-volume-3786ecb4-9454-40c9-ad8d-5dd8054e8b0d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:18:22.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7962" for this suite.
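The projected downwardAPI test above creates a pod whose volume exposes the container's own memory request as a file. A minimal manifest reproducing that setup (the pod name, image, and file path are illustrative, not the generated values the test uses) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the test generates a UUID-based name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the memory request that the downward API wrote into the volume
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```

The test then reads the container's logs and checks that the printed value matches the declared request, which is why the pod is expected to reach "Succeeded".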
Feb 4 13:18:28.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:18:29.040: INFO: namespace projected-7962 deletion completed in 6.172546123s

• [SLOW TEST:18.785 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:18:29.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 4 13:18:29.143: INFO: Waiting up to 5m0s for pod "var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69" in namespace "var-expansion-1843" to be "success or failure"
Feb 4 13:18:29.153: INFO: Pod "var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69": Phase="Pending", Reason="", readiness=false. Elapsed: 9.857953ms
Feb 4 13:18:31.166: INFO: Pod "var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022714778s
Feb 4 13:18:33.179: INFO: Pod "var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035516612s
Feb 4 13:18:35.185: INFO: Pod "var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042332337s
Feb 4 13:18:37.193: INFO: Pod "var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049592775s
Feb 4 13:18:39.200: INFO: Pod "var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69": Phase="Pending", Reason="", readiness=false. Elapsed: 10.056485119s
Feb 4 13:18:41.205: INFO: Pod "var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.062392776s
STEP: Saw pod success
Feb 4 13:18:41.206: INFO: Pod "var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69" satisfied condition "success or failure"
Feb 4 13:18:41.208: INFO: Trying to get logs from node iruya-node pod var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69 container dapi-container:
STEP: delete the pod
Feb 4 13:18:41.319: INFO: Waiting for pod var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69 to disappear
Feb 4 13:18:41.332: INFO: Pod var-expansion-24e72ed5-4ec5-4789-bd63-f5d37770ae69 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:18:41.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1843" for this suite.
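The Variable Expansion test above exercises dependent environment variables, where one variable's value is built from earlier ones using `$(VAR)` syntax. A minimal manifest of the kind this test creates (pod name, image, and the exact composed value are illustrative) could be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative; the test generates a UUID-based name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # Print the composed variable so the test can verify expansion in the logs
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    # $(FOO) and $(BAR) are expanded by the kubelet before the container starts
    - name: FOOBAR
      value: "$(FOO) $(BAR)"
```

Because expansion happens at container creation, `FOOBAR` already holds the concatenated value when the shell runs; the test asserts on the resulting log output.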
Feb 4 13:18:47.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:18:47.541: INFO: namespace var-expansion-1843 deletion completed in 6.204638258s

• [SLOW TEST:18.501 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:18:47.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9001
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9001 to expose endpoints map[]
Feb 4 13:18:47.734: INFO: Get endpoints failed (10.027859ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 4 13:18:48.748: INFO: successfully validated that service endpoint-test2 in namespace services-9001 exposes endpoints map[] (1.023870542s elapsed)
STEP: Creating pod pod1 in namespace services-9001
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9001 to expose endpoints map[pod1:[80]]
Feb 4 13:18:52.861: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.07952531s elapsed, will retry)
Feb 4 13:18:57.946: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.164635264s elapsed, will retry)
Feb 4 13:19:01.036: INFO: successfully validated that service endpoint-test2 in namespace services-9001 exposes endpoints map[pod1:[80]] (12.254413114s elapsed)
STEP: Creating pod pod2 in namespace services-9001
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9001 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 4 13:19:05.719: INFO: Unexpected endpoints: found map[4ee23350-9925-411d-b3b0-0166bbfbf312:[80]], expected map[pod1:[80] pod2:[80]] (4.679061116s elapsed, will retry)
Feb 4 13:19:10.438: INFO: successfully validated that service endpoint-test2 in namespace services-9001 exposes endpoints map[pod1:[80] pod2:[80]] (9.39805126s elapsed)
STEP: Deleting pod pod1 in namespace services-9001
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9001 to expose endpoints map[pod2:[80]]
Feb 4 13:19:11.515: INFO: successfully validated that service endpoint-test2 in namespace services-9001 exposes endpoints map[pod2:[80]] (1.067466112s elapsed)
STEP: Deleting pod pod2 in namespace services-9001
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9001 to expose endpoints map[]
Feb 4 13:19:12.551: INFO: successfully validated that service endpoint-test2 in namespace services-9001 exposes endpoints map[] (1.028387967s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:19:13.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9001" for this suite.
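The endpoint test above creates a selector-based Service first, then pods matching the selector, and watches the Endpoints object fill and drain as pods come and go. A minimal pair of manifests showing that relationship (the label key, image, and port are illustrative assumptions, not the test's exact generated spec) could be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2      # endpoints are derived from ready pods with this label
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2      # matching label, so pod1's IP:80 appears in Endpoints
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 80
```

While the Service has no matching ready pods, its Endpoints object is empty (`map[]` in the log); once `pod1` becomes ready the controller adds `pod1:[80]`, and deleting the pod removes it again.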
Feb 4 13:19:35.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:19:36.072: INFO: namespace services-9001 deletion completed in 22.416180837s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:48.531 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:19:36.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 4 13:19:36.262: INFO: Number of nodes with available pods: 0
Feb 4 13:19:36.262: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:37.276: INFO: Number of nodes with available pods: 0
Feb 4 13:19:37.276: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:38.535: INFO: Number of nodes with available pods: 0
Feb 4 13:19:38.535: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:39.281: INFO: Number of nodes with available pods: 0
Feb 4 13:19:39.281: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:40.309: INFO: Number of nodes with available pods: 0
Feb 4 13:19:40.309: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:41.929: INFO: Number of nodes with available pods: 0
Feb 4 13:19:41.929: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:42.420: INFO: Number of nodes with available pods: 0
Feb 4 13:19:42.420: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:43.277: INFO: Number of nodes with available pods: 0
Feb 4 13:19:43.277: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:44.274: INFO: Number of nodes with available pods: 0
Feb 4 13:19:44.274: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:45.276: INFO: Number of nodes with available pods: 0
Feb 4 13:19:45.276: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:46.282: INFO: Number of nodes with available pods: 2
Feb 4 13:19:46.282: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
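The "simple DaemonSet" the test polls for above schedules one pod per schedulable node, which is why the run settles at 2 available pods on this 2-node cluster. A minimal manifest of the kind the test creates (label key and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # illustrative label; must match the template below
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # any long-running image works for this check
```

The second phase of the test deletes one daemon pod by hand and waits for the DaemonSet controller to recreate it, which accounts for the long run of "available pods: 1" entries that follow.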
Feb 4 13:19:46.331: INFO: Number of nodes with available pods: 1
Feb 4 13:19:46.332: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:47.348: INFO: Number of nodes with available pods: 1
Feb 4 13:19:47.348: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:48.349: INFO: Number of nodes with available pods: 1
Feb 4 13:19:48.350: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:49.352: INFO: Number of nodes with available pods: 1
Feb 4 13:19:49.352: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:50.349: INFO: Number of nodes with available pods: 1
Feb 4 13:19:50.349: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:51.348: INFO: Number of nodes with available pods: 1
Feb 4 13:19:51.348: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:52.343: INFO: Number of nodes with available pods: 1
Feb 4 13:19:52.343: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:53.353: INFO: Number of nodes with available pods: 1
Feb 4 13:19:53.353: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:54.348: INFO: Number of nodes with available pods: 1
Feb 4 13:19:54.349: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:55.347: INFO: Number of nodes with available pods: 1
Feb 4 13:19:55.347: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:56.350: INFO: Number of nodes with available pods: 1
Feb 4 13:19:56.350: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:57.369: INFO: Number of nodes with available pods: 1
Feb 4 13:19:57.369: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:58.350: INFO: Number of nodes with available pods: 1
Feb 4 13:19:58.350: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:19:59.355: INFO: Number of nodes with available pods: 1
Feb 4 13:19:59.355: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:20:00.359: INFO: Number of nodes with available pods: 1
Feb 4 13:20:00.359: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:20:01.353: INFO: Number of nodes with available pods: 1
Feb 4 13:20:01.353: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:20:02.387: INFO: Number of nodes with available pods: 1
Feb 4 13:20:02.387: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:20:03.348: INFO: Number of nodes with available pods: 1
Feb 4 13:20:03.348: INFO: Node iruya-node is running more than one daemon pod
Feb 4 13:20:04.434: INFO: Number of nodes with available pods: 2
Feb 4 13:20:04.434: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4653, will wait for the garbage collector to delete the pods
Feb 4 13:20:04.512: INFO: Deleting DaemonSet.extensions daemon-set took: 19.679491ms
Feb 4 13:20:04.813: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.690999ms
Feb 4 13:20:16.621: INFO: Number of nodes with available pods: 0
Feb 4 13:20:16.621: INFO: Number of running nodes: 0, number of available pods: 0
Feb 4 13:20:16.624: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4653/daemonsets","resourceVersion":"23066814"},"items":null}
Feb 4 13:20:16.626: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4653/pods","resourceVersion":"23066814"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:20:16.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4653" for this suite.
Feb 4 13:20:22.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:20:22.799: INFO: namespace daemonsets-4653 deletion completed in 6.154455606s

• [SLOW TEST:46.726 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:20:22.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 4 13:20:22.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c" in namespace "downward-api-7252" to be "success or failure"
Feb 4 13:20:22.911: INFO: Pod "downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.147211ms
Feb 4 13:20:24.921: INFO: Pod "downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0293413s
Feb 4 13:20:26.927: INFO: Pod "downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035694593s
Feb 4 13:20:28.942: INFO: Pod "downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05028151s
Feb 4 13:20:30.955: INFO: Pod "downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062960601s
STEP: Saw pod success
Feb 4 13:20:30.955: INFO: Pod "downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c" satisfied condition "success or failure"
Feb 4 13:20:30.959: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c container client-container:
STEP: delete the pod
Feb 4 13:20:31.021: INFO: Waiting for pod downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c to disappear
Feb 4 13:20:31.024: INFO: Pod downwardapi-volume-e0299000-7670-420d-8416-bd52d2847c1c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:20:31.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7252" for this suite.
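The Downward API volume test above verifies a subtle default: when a container declares no memory limit, `limits.memory` exposed through the downward API resolves to the node's allocatable memory. A minimal manifest demonstrating that behavior (names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-limits-example   # illustrative; the test generates a UUID-based name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No resources.limits set: the value written to mem_limit falls back to
    # the node's allocatable memory rather than a container-level limit.
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The test compares the file's contents against the node's allocatable memory reported in its status, which is why it runs a plain downward API volume rather than the projected variant used earlier.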
Feb 4 13:20:37.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:20:37.211: INFO: namespace downward-api-7252 deletion completed in 6.182837154s

• [SLOW TEST:14.412 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:20:37.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-e5eb5262-d4c5-48d7-98cc-26e4d46b8556
STEP: Creating a pod to test consume secrets
Feb 4 13:20:37.468: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189" in namespace "projected-2820" to be "success or failure"
Feb 4 13:20:37.507: INFO: Pod "pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189": Phase="Pending", Reason="", readiness=false. Elapsed: 38.313239ms
Feb 4 13:20:39.518: INFO: Pod "pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049344032s
Feb 4 13:20:41.548: INFO: Pod "pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079393439s
Feb 4 13:20:43.556: INFO: Pod "pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087413615s
Feb 4 13:20:45.567: INFO: Pod "pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098128304s
Feb 4 13:20:47.577: INFO: Pod "pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108644312s
STEP: Saw pod success
Feb 4 13:20:47.577: INFO: Pod "pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189" satisfied condition "success or failure"
Feb 4 13:20:47.582: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189 container projected-secret-volume-test:
STEP: delete the pod
Feb 4 13:20:47.660: INFO: Waiting for pod pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189 to disappear
Feb 4 13:20:47.672: INFO: Pod pod-projected-secrets-5613afdd-951c-4cb0-a5fa-9b260e169189 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:20:47.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2820" for this suite.
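The projected secret test above mounts a secret through a projected volume while running as a non-root user, with `defaultMode` controlling file permissions and `fsGroup` controlling group ownership. A minimal manifest showing those pieces together (names, UID/GID values, and mode are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative; the test generates a UUID-based name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000     # non-root UID; files must still be readable by this user
    fsGroup: 1001       # volume files are group-owned by this GID
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # List the mounted secret files so permissions and ownership can be inspected
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440   # read-only for owner and group
      sources:
      - secret:
          name: projected-secret-test   # must reference an existing Secret
```

With `defaultMode: 0440` and `fsGroup: 1001`, the non-root container can read the secret via its group ownership even though it is not the file owner, which is exactly the combination the test verifies.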
Feb 4 13:20:53.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:20:53.871: INFO: namespace projected-2820 deletion completed in 6.192610903s
• [SLOW TEST:16.660 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:20:53.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 4 13:20:54.038: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 4 13:20:54.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4128'
Feb 4 13:20:54.480: INFO: stderr: ""
Feb 4 13:20:54.481: INFO: stdout: "service/redis-slave created\n"
Feb 4 13:20:54.482: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 4 13:20:54.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4128'
Feb 4 13:20:54.971: INFO: stderr: ""
Feb 4 13:20:54.971: INFO: stdout: "service/redis-master created\n"
Feb 4 13:20:54.972: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 4 13:20:54.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4128'
Feb 4 13:20:55.600: INFO: stderr: ""
Feb 4 13:20:55.600: INFO: stdout: "service/frontend created\n"
Feb 4 13:20:55.601: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 4 13:20:55.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4128'
Feb 4 13:20:56.069: INFO: stderr: ""
Feb 4 13:20:56.069: INFO: stdout: "deployment.apps/frontend created\n"
Feb 4 13:20:56.070: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 4 13:20:56.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4128'
Feb 4 13:20:56.686: INFO: stderr: ""
Feb 4 13:20:56.686: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 4 13:20:56.687: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 4 13:20:56.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4128'
Feb 4 13:20:57.731: INFO: stderr: ""
Feb 4 13:20:57.731: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 4 13:20:57.731: INFO: Waiting for all frontend pods to be Running.
Feb 4 13:21:22.784: INFO: Waiting for frontend to serve content.
Feb 4 13:21:22.967: INFO: Trying to add a new entry to the guestbook.
Feb 4 13:21:23.031: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources Feb 4 13:21:23.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4128' Feb 4 13:21:23.458: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:21:23.459: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:21:23.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4128' Feb 4 13:21:23.697: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:21:23.697: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:21:23.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4128' Feb 4 13:21:24.043: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:21:24.043: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:21:24.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4128' Feb 4 13:21:24.158: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:21:24.159: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:21:24.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4128' Feb 4 13:21:24.269: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:21:24.270: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:21:24.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4128' Feb 4 13:21:24.411: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:21:24.411: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:21:24.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4128" for this suite. 
Feb 4 13:22:10.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:22:10.707: INFO: namespace kubectl-4128 deletion completed in 46.286319591s • [SLOW TEST:76.835 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:22:10.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Feb 4 13:22:10.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4214' Feb 4 13:22:11.175: INFO: stderr: "" Feb 4 13:22:11.175: INFO: stdout: "pod/pause created\n" Feb 4 13:22:11.175: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 4 13:22:11.175: INFO: Waiting 
up to 5m0s for pod "pause" in namespace "kubectl-4214" to be "running and ready" Feb 4 13:22:11.184: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.407414ms Feb 4 13:22:13.196: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020985023s Feb 4 13:22:15.206: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031073787s Feb 4 13:22:17.220: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045344485s Feb 4 13:22:19.232: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.05709423s Feb 4 13:22:19.232: INFO: Pod "pause" satisfied condition "running and ready" Feb 4 13:22:19.232: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Feb 4 13:22:19.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4214' Feb 4 13:22:19.424: INFO: stderr: "" Feb 4 13:22:19.424: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 4 13:22:19.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4214' Feb 4 13:22:19.544: INFO: stderr: "" Feb 4 13:22:19.544: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 4 13:22:19.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4214' Feb 4 13:22:19.706: INFO: stderr: "" Feb 4 13:22:19.706: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have 
the label testing-label Feb 4 13:22:19.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4214' Feb 4 13:22:19.824: INFO: stderr: "" Feb 4 13:22:19.824: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Feb 4 13:22:19.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4214' Feb 4 13:22:19.964: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:22:19.965: INFO: stdout: "pod \"pause\" force deleted\n" Feb 4 13:22:19.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4214' Feb 4 13:22:20.121: INFO: stderr: "No resources found.\n" Feb 4 13:22:20.121: INFO: stdout: "" Feb 4 13:22:20.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4214 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 4 13:22:20.197: INFO: stderr: "" Feb 4 13:22:20.197: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:22:20.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4214" for this suite. 
Feb 4 13:22:26.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:22:26.394: INFO: namespace kubectl-4214 deletion completed in 6.192097965s • [SLOW TEST:15.684 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:22:26.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 4 13:22:26.499: INFO: Waiting up to 5m0s for pod "downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38" in namespace "downward-api-9584" to be "success or failure" Feb 4 13:22:26.551: INFO: Pod 
"downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38": Phase="Pending", Reason="", readiness=false. Elapsed: 52.117696ms Feb 4 13:22:28.568: INFO: Pod "downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068840561s Feb 4 13:22:30.589: INFO: Pod "downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089357948s Feb 4 13:22:32.605: INFO: Pod "downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105495197s Feb 4 13:22:34.636: INFO: Pod "downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.136993517s STEP: Saw pod success Feb 4 13:22:34.637: INFO: Pod "downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38" satisfied condition "success or failure" Feb 4 13:22:34.648: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38 container client-container: STEP: delete the pod Feb 4 13:22:34.966: INFO: Waiting for pod downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38 to disappear Feb 4 13:22:35.024: INFO: Pod downwardapi-volume-044170ee-5666-495f-8b02-bf84edffaf38 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:22:35.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9584" for this suite. 
Feb 4 13:22:41.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:22:41.306: INFO: namespace downward-api-9584 deletion completed in 6.254190699s • [SLOW TEST:14.912 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:22:41.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 4 13:22:41.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5560' Feb 4 13:22:41.598: INFO: stderr: "kubectl run 
--generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 4 13:22:41.598: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Feb 4 13:22:41.690: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 4 13:22:41.698: INFO: scanned /root for discovery docs: Feb 4 13:22:41.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5560' Feb 4 13:23:04.989: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 4 13:23:04.989: INFO: stdout: "Created e2e-test-nginx-rc-eb6cc7cf0a472e74f816fee3b4896d66\nScaling up e2e-test-nginx-rc-eb6cc7cf0a472e74f816fee3b4896d66 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-eb6cc7cf0a472e74f816fee3b4896d66 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-eb6cc7cf0a472e74f816fee3b4896d66 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 4 13:23:04.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5560' Feb 4 13:23:05.102: INFO: stderr: "" Feb 4 13:23:05.102: INFO: stdout: "e2e-test-nginx-rc-eb6cc7cf0a472e74f816fee3b4896d66-twphh e2e-test-nginx-rc-fj9vs " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 4 13:23:10.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5560' Feb 4 13:23:10.238: INFO: stderr: "" Feb 4 13:23:10.238: INFO: stdout: "e2e-test-nginx-rc-eb6cc7cf0a472e74f816fee3b4896d66-twphh " Feb 4 13:23:10.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-eb6cc7cf0a472e74f816fee3b4896d66-twphh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5560' Feb 4 13:23:10.348: INFO: stderr: "" Feb 4 13:23:10.348: INFO: stdout: "true" Feb 4 13:23:10.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-eb6cc7cf0a472e74f816fee3b4896d66-twphh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5560' Feb 4 13:23:10.447: INFO: stderr: "" Feb 4 13:23:10.447: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 4 13:23:10.447: INFO: e2e-test-nginx-rc-eb6cc7cf0a472e74f816fee3b4896d66-twphh is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Feb 4 13:23:10.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5560' Feb 4 13:23:10.585: INFO: stderr: "" Feb 4 13:23:10.585: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:23:10.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5560" for this suite. 
Feb 4 13:23:32.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:23:32.701: INFO: namespace kubectl-5560 deletion completed in 22.108770362s • [SLOW TEST:51.394 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:23:32.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:23:39.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2914" for this suite. Feb 4 13:23:45.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:23:45.303: INFO: namespace namespaces-2914 deletion completed in 6.228325077s STEP: Destroying namespace "nsdeletetest-2411" for this suite. Feb 4 13:23:45.307: INFO: Namespace nsdeletetest-2411 was already deleted STEP: Destroying namespace "nsdeletetest-1881" for this suite. Feb 4 13:23:51.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:23:51.483: INFO: namespace nsdeletetest-1881 deletion completed in 6.175155362s • [SLOW TEST:18.782 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:23:51.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Feb 4 13:23:51.578: INFO: namespace kubectl-4672 Feb 4 13:23:51.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4672' Feb 4 13:23:51.898: INFO: stderr: "" Feb 4 13:23:51.899: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 4 13:23:52.909: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:23:52.909: INFO: Found 0 / 1 Feb 4 13:23:53.932: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:23:53.932: INFO: Found 0 / 1 Feb 4 13:23:54.920: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:23:54.920: INFO: Found 0 / 1 Feb 4 13:23:55.913: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:23:55.913: INFO: Found 0 / 1 Feb 4 13:23:56.912: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:23:56.913: INFO: Found 0 / 1 Feb 4 13:23:57.908: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:23:57.909: INFO: Found 0 / 1 Feb 4 13:23:58.914: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:23:58.914: INFO: Found 0 / 1 Feb 4 13:23:59.909: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:23:59.909: INFO: Found 0 / 1 Feb 4 13:24:00.912: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:24:00.912: INFO: Found 1 / 1 Feb 4 13:24:00.912: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 4 13:24:00.918: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:24:00.918: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 4 13:24:00.918: INFO: wait on redis-master startup in kubectl-4672 Feb 4 13:24:00.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8cr6k redis-master --namespace=kubectl-4672' Feb 4 13:24:01.116: INFO: stderr: "" Feb 4 13:24:01.116: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Feb 13:23:58.837 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Feb 13:23:58.837 # Server started, Redis version 3.2.12\n1:M 04 Feb 13:23:58.838 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Feb 13:23:58.838 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Feb 4 13:24:01.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4672' Feb 4 13:24:01.380: INFO: stderr: "" Feb 4 13:24:01.380: INFO: stdout: "service/rm2 exposed\n" Feb 4 13:24:01.413: INFO: Service rm2 in namespace kubectl-4672 found. 
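For reference: `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` generates a Service roughly equivalent to the manifest below. The selector is inferred from the RC's pod labels; `app: redis` is taken from the selector output earlier in this test, and the rest is a sketch of what `expose` produces.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis   # inferred from the RC's pod template labels
  ports:
  - port: 1234        # service port from --port
    targetPort: 6379  # container port from --target-port
```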
STEP: exposing service
Feb 4 13:24:03.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4672'
Feb 4 13:24:03.763: INFO: stderr: ""
Feb 4 13:24:03.763: INFO: stdout: "service/rm3 exposed\n"
Feb 4 13:24:03.773: INFO: Service rm3 in namespace kubectl-4672 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:24:05.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4672" for this suite.
Feb 4 13:24:27.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:24:27.996: INFO: namespace kubectl-4672 deletion completed in 22.188926498s
• [SLOW TEST:36.513 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:24:27.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 4 13:24:28.104: INFO: Waiting up to 5m0s for pod "downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa" in namespace "downward-api-7195" to be "success or failure"
Feb 4 13:24:28.134: INFO: Pod "downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa": Phase="Pending", Reason="", readiness=false. Elapsed: 30.371698ms
Feb 4 13:24:30.146: INFO: Pod "downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042273675s
Feb 4 13:24:32.155: INFO: Pod "downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050946372s
Feb 4 13:24:34.162: INFO: Pod "downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058415161s
Feb 4 13:24:36.173: INFO: Pod "downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069534173s
Feb 4 13:24:38.182: INFO: Pod "downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078073994s
STEP: Saw pod success
Feb 4 13:24:38.182: INFO: Pod "downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa" satisfied condition "success or failure"
Feb 4 13:24:38.185: INFO: Trying to get logs from node iruya-node pod downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa container dapi-container:
STEP: delete the pod
Feb 4 13:24:38.362: INFO: Waiting for pod downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa to disappear
Feb 4 13:24:38.375: INFO: Pod downward-api-6ab379f7-ab22-4700-b8e7-b567cccb49aa no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:24:38.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7195" for this suite.
Feb 4 13:24:44.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:24:44.684: INFO: namespace downward-api-7195 deletion completed in 6.294784371s
• [SLOW TEST:16.688 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:24:44.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 4 13:24:44.814: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25" in namespace "downward-api-4454" to be "success or failure"
Feb 4 13:24:44.821: INFO: Pod "downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.586018ms
Feb 4 13:24:46.835: INFO: Pod "downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020211611s
Feb 4 13:24:48.850: INFO: Pod "downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035724918s
Feb 4 13:24:50.871: INFO: Pod "downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056405247s
Feb 4 13:24:52.891: INFO: Pod "downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076820888s
STEP: Saw pod success
Feb 4 13:24:52.891: INFO: Pod "downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25" satisfied condition "success or failure"
Feb 4 13:24:52.898: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25 container client-container:
STEP: delete the pod
Feb 4 13:24:53.027: INFO: Waiting for pod downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25 to disappear
Feb 4 13:24:53.037: INFO: Pod downwardapi-volume-7396ca5a-4a8e-41aa-9607-4717e28e4e25 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:24:53.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4454" for this suite.
Feb 4 13:24:59.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:24:59.175: INFO: namespace downward-api-4454 deletion completed in 6.128810003s
• [SLOW TEST:14.491 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:24:59.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a
default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 4 13:24:59.287: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9070,SelfLink:/api/v1/namespaces/watch-9070/configmaps/e2e-watch-test-label-changed,UID:d9d907ce-124b-4235-950e-50994a19f389,ResourceVersion:23067707,Generation:0,CreationTimestamp:2020-02-04 13:24:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 4 13:24:59.288: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9070,SelfLink:/api/v1/namespaces/watch-9070/configmaps/e2e-watch-test-label-changed,UID:d9d907ce-124b-4235-950e-50994a19f389,ResourceVersion:23067708,Generation:0,CreationTimestamp:2020-02-04 13:24:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 4 13:24:59.288: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9070,SelfLink:/api/v1/namespaces/watch-9070/configmaps/e2e-watch-test-label-changed,UID:d9d907ce-124b-4235-950e-50994a19f389,ResourceVersion:23067709,Generation:0,CreationTimestamp:2020-02-04 13:24:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 4 13:25:09.385: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9070,SelfLink:/api/v1/namespaces/watch-9070/configmaps/e2e-watch-test-label-changed,UID:d9d907ce-124b-4235-950e-50994a19f389,ResourceVersion:23067724,Generation:0,CreationTimestamp:2020-02-04 13:24:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 4 13:25:09.386: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9070,SelfLink:/api/v1/namespaces/watch-9070/configmaps/e2e-watch-test-label-changed,UID:d9d907ce-124b-4235-950e-50994a19f389,ResourceVersion:23067725,Generation:0,CreationTimestamp:2020-02-04 13:24:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 4 13:25:09.386: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9070,SelfLink:/api/v1/namespaces/watch-9070/configmaps/e2e-watch-test-label-changed,UID:d9d907ce-124b-4235-950e-50994a19f389,ResourceVersion:23067726,Generation:0,CreationTimestamp:2020-02-04 13:24:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:25:09.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9070" for this suite.
Feb 4 13:25:15.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:25:15.718: INFO: namespace watch-9070 deletion completed in 6.317174933s
• [SLOW TEST:16.543 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:25:15.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 4 13:25:24.467: INFO: Successfully updated pod "annotationupdate1839286d-3a88-4120-92b1-75631b4ae160"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:25:26.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5227" for this suite.
Feb 4 13:25:48.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:25:48.734: INFO: namespace downward-api-5227 deletion completed in 22.179167514s
• [SLOW TEST:33.014 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:25:48.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 4 13:25:59.034: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:25:59.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8084" for this suite.
Feb 4 13:26:05.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:26:05.220: INFO: namespace container-runtime-8084 deletion completed in 6.142492331s
• [SLOW TEST:16.486 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:26:05.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 4 13:26:15.409: INFO: Waiting up to 5m0s for pod "client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6" in namespace "pods-5354" to be "success or failure"
Feb 4 13:26:15.430: INFO: Pod "client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.651688ms
Feb 4 13:26:17.441: INFO: Pod "client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031150182s
Feb 4 13:26:19.491: INFO: Pod "client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081428982s
Feb 4 13:26:21.501: INFO: Pod "client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091875525s
Feb 4 13:26:23.510: INFO: Pod "client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100880118s
Feb 4 13:26:25.517: INFO: Pod "client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107862668s
STEP: Saw pod success
Feb 4 13:26:25.517: INFO: Pod "client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6" satisfied condition "success or failure"
Feb 4 13:26:25.523: INFO: Trying to get logs from node iruya-node pod client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6 container env3cont:
STEP: delete the pod
Feb 4 13:26:25.680: INFO: Waiting for pod client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6 to disappear
Feb 4 13:26:25.769: INFO: Pod client-envvars-21745372-d18c-406a-8b60-e9cb8da28fb6 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:26:25.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5354" for this suite.
Feb 4 13:27:27.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:27:27.938: INFO: namespace pods-5354 deletion completed in 1m2.154879449s
• [SLOW TEST:82.718 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:27:27.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-447ca023-5c1c-4908-a90b-eb66b65d1d47
STEP: Creating a pod to test consume secrets
Feb 4 13:27:28.074: INFO: Waiting up to 5m0s for pod "pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4" in namespace "secrets-7256" to be "success or failure"
Feb 4 13:27:28.097: INFO: Pod "pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4": Phase="Pending", Reason="", readiness=false. Elapsed: 23.306888ms
Feb 4 13:27:30.108: INFO: Pod "pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03433737s
Feb 4 13:27:32.119: INFO: Pod "pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045343767s
Feb 4 13:27:34.130: INFO: Pod "pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056154293s
Feb 4 13:27:36.142: INFO: Pod "pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067932687s
Feb 4 13:27:38.152: INFO: Pod "pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078366942s
STEP: Saw pod success
Feb 4 13:27:38.153: INFO: Pod "pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4" satisfied condition "success or failure"
Feb 4 13:27:38.179: INFO: Trying to get logs from node iruya-node pod pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4 container secret-volume-test:
STEP: delete the pod
Feb 4 13:27:38.231: INFO: Waiting for pod pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4 to disappear
Feb 4 13:27:38.237: INFO: Pod pod-secrets-a735a81c-fd27-4bd1-98f4-67c35cc7d2a4 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:27:38.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7256" for this suite.
Feb 4 13:27:44.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:27:44.383: INFO: namespace secrets-7256 deletion completed in 6.131149364s
• [SLOW TEST:16.444 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:27:44.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5276
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5276
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5276
Feb 4
13:27:44.511: INFO: Found 0 stateful pods, waiting for 1 Feb 4 13:27:54.526: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 4 13:27:54.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5276 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 13:27:55.085: INFO: stderr: "I0204 13:27:54.683327 781 log.go:172] (0xc00013af20) (0xc0005fab40) Create stream\nI0204 13:27:54.683447 781 log.go:172] (0xc00013af20) (0xc0005fab40) Stream added, broadcasting: 1\nI0204 13:27:54.688183 781 log.go:172] (0xc00013af20) Reply frame received for 1\nI0204 13:27:54.688230 781 log.go:172] (0xc00013af20) (0xc000986000) Create stream\nI0204 13:27:54.688251 781 log.go:172] (0xc00013af20) (0xc000986000) Stream added, broadcasting: 3\nI0204 13:27:54.690016 781 log.go:172] (0xc00013af20) Reply frame received for 3\nI0204 13:27:54.690052 781 log.go:172] (0xc00013af20) (0xc0005fabe0) Create stream\nI0204 13:27:54.690061 781 log.go:172] (0xc00013af20) (0xc0005fabe0) Stream added, broadcasting: 5\nI0204 13:27:54.692743 781 log.go:172] (0xc00013af20) Reply frame received for 5\nI0204 13:27:54.845606 781 log.go:172] (0xc00013af20) Data frame received for 5\nI0204 13:27:54.845638 781 log.go:172] (0xc0005fabe0) (5) Data frame handling\nI0204 13:27:54.845651 781 log.go:172] (0xc0005fabe0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 13:27:54.919504 781 log.go:172] (0xc00013af20) Data frame received for 3\nI0204 13:27:54.919693 781 log.go:172] (0xc000986000) (3) Data frame handling\nI0204 13:27:54.919741 781 log.go:172] (0xc000986000) (3) Data frame sent\nI0204 13:27:55.078709 781 log.go:172] (0xc00013af20) Data frame received for 1\nI0204 13:27:55.078830 781 log.go:172] (0xc0005fab40) (1) Data frame handling\nI0204 13:27:55.078876 781 log.go:172] (0xc0005fab40) (1) 
Data frame sent\nI0204 13:27:55.078886 781 log.go:172] (0xc00013af20) (0xc0005fab40) Stream removed, broadcasting: 1\nI0204 13:27:55.079572 781 log.go:172] (0xc00013af20) (0xc000986000) Stream removed, broadcasting: 3\nI0204 13:27:55.079608 781 log.go:172] (0xc00013af20) (0xc0005fabe0) Stream removed, broadcasting: 5\nI0204 13:27:55.079633 781 log.go:172] (0xc00013af20) (0xc0005fab40) Stream removed, broadcasting: 1\nI0204 13:27:55.079650 781 log.go:172] (0xc00013af20) (0xc000986000) Stream removed, broadcasting: 3\nI0204 13:27:55.079677 781 log.go:172] (0xc00013af20) (0xc0005fabe0) Stream removed, broadcasting: 5\nI0204 13:27:55.079726 781 log.go:172] (0xc00013af20) Go away received\n" Feb 4 13:27:55.086: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 13:27:55.086: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 4 13:27:55.095: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 4 13:28:05.104: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:28:05.104: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:28:05.136: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999707s Feb 4 13:28:06.151: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988447414s Feb 4 13:28:07.160: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.973543365s Feb 4 13:28:08.171: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.965398386s Feb 4 13:28:09.180: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.9540412s Feb 4 13:28:10.190: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.945209225s Feb 4 13:28:11.215: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.934706768s Feb 4 13:28:12.228: INFO: Verifying statefulset ss doesn't scale past 1 for 
another 2.909520519s Feb 4 13:28:13.247: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.897341225s Feb 4 13:28:14.256: INFO: Verifying statefulset ss doesn't scale past 1 for another 878.225432ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5276 Feb 4 13:28:15.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5276 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 13:28:18.086: INFO: stderr: "I0204 13:28:17.673102 801 log.go:172] (0xc000132790) (0xc000742640) Create stream\nI0204 13:28:17.673202 801 log.go:172] (0xc000132790) (0xc000742640) Stream added, broadcasting: 1\nI0204 13:28:17.681183 801 log.go:172] (0xc000132790) Reply frame received for 1\nI0204 13:28:17.681298 801 log.go:172] (0xc000132790) (0xc00069a0a0) Create stream\nI0204 13:28:17.681320 801 log.go:172] (0xc000132790) (0xc00069a0a0) Stream added, broadcasting: 3\nI0204 13:28:17.683973 801 log.go:172] (0xc000132790) Reply frame received for 3\nI0204 13:28:17.684035 801 log.go:172] (0xc000132790) (0xc0007426e0) Create stream\nI0204 13:28:17.684043 801 log.go:172] (0xc000132790) (0xc0007426e0) Stream added, broadcasting: 5\nI0204 13:28:17.686300 801 log.go:172] (0xc000132790) Reply frame received for 5\nI0204 13:28:17.845669 801 log.go:172] (0xc000132790) Data frame received for 3\nI0204 13:28:17.845775 801 log.go:172] (0xc00069a0a0) (3) Data frame handling\nI0204 13:28:17.845823 801 log.go:172] (0xc00069a0a0) (3) Data frame sent\nI0204 13:28:17.845910 801 log.go:172] (0xc000132790) Data frame received for 5\nI0204 13:28:17.845926 801 log.go:172] (0xc0007426e0) (5) Data frame handling\nI0204 13:28:17.845948 801 log.go:172] (0xc0007426e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0204 13:28:18.075580 801 log.go:172] (0xc000132790) Data frame received for 1\nI0204 13:28:18.075636 801 log.go:172] 
(0xc000742640) (1) Data frame handling\nI0204 13:28:18.075660 801 log.go:172] (0xc000742640) (1) Data frame sent\nI0204 13:28:18.075686 801 log.go:172] (0xc000132790) (0xc00069a0a0) Stream removed, broadcasting: 3\nI0204 13:28:18.075829 801 log.go:172] (0xc000132790) (0xc000742640) Stream removed, broadcasting: 1\nI0204 13:28:18.076587 801 log.go:172] (0xc000132790) (0xc0007426e0) Stream removed, broadcasting: 5\nI0204 13:28:18.076621 801 log.go:172] (0xc000132790) Go away received\nI0204 13:28:18.076850 801 log.go:172] (0xc000132790) (0xc000742640) Stream removed, broadcasting: 1\nI0204 13:28:18.076868 801 log.go:172] (0xc000132790) (0xc00069a0a0) Stream removed, broadcasting: 3\nI0204 13:28:18.076878 801 log.go:172] (0xc000132790) (0xc0007426e0) Stream removed, broadcasting: 5\n" Feb 4 13:28:18.086: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 4 13:28:18.086: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 4 13:28:18.094: INFO: Found 1 stateful pods, waiting for 3 Feb 4 13:28:28.124: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:28:28.124: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:28:28.124: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 4 13:28:38.105: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:28:38.105: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:28:38.105: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 4 13:28:38.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5276 
ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 13:28:38.890: INFO: stderr: "I0204 13:28:38.378777 832 log.go:172] (0xc000962370) (0xc00099e820) Create stream\nI0204 13:28:38.379033 832 log.go:172] (0xc000962370) (0xc00099e820) Stream added, broadcasting: 1\nI0204 13:28:38.385463 832 log.go:172] (0xc000962370) Reply frame received for 1\nI0204 13:28:38.385500 832 log.go:172] (0xc000962370) (0xc00099e8c0) Create stream\nI0204 13:28:38.385512 832 log.go:172] (0xc000962370) (0xc00099e8c0) Stream added, broadcasting: 3\nI0204 13:28:38.387684 832 log.go:172] (0xc000962370) Reply frame received for 3\nI0204 13:28:38.387722 832 log.go:172] (0xc000962370) (0xc0005f4280) Create stream\nI0204 13:28:38.387751 832 log.go:172] (0xc000962370) (0xc0005f4280) Stream added, broadcasting: 5\nI0204 13:28:38.390479 832 log.go:172] (0xc000962370) Reply frame received for 5\nI0204 13:28:38.604363 832 log.go:172] (0xc000962370) Data frame received for 5\nI0204 13:28:38.604407 832 log.go:172] (0xc0005f4280) (5) Data frame handling\nI0204 13:28:38.604433 832 log.go:172] (0xc000962370) Data frame received for 3\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 13:28:38.604470 832 log.go:172] (0xc00099e8c0) (3) Data frame handling\nI0204 13:28:38.604484 832 log.go:172] (0xc00099e8c0) (3) Data frame sent\nI0204 13:28:38.604591 832 log.go:172] (0xc0005f4280) (5) Data frame sent\nI0204 13:28:38.877114 832 log.go:172] (0xc000962370) Data frame received for 1\nI0204 13:28:38.877194 832 log.go:172] (0xc00099e820) (1) Data frame handling\nI0204 13:28:38.877223 832 log.go:172] (0xc00099e820) (1) Data frame sent\nI0204 13:28:38.877239 832 log.go:172] (0xc000962370) (0xc00099e820) Stream removed, broadcasting: 1\nI0204 13:28:38.877682 832 log.go:172] (0xc000962370) (0xc0005f4280) Stream removed, broadcasting: 5\nI0204 13:28:38.877769 832 log.go:172] (0xc000962370) (0xc00099e8c0) Stream removed, broadcasting: 3\nI0204 13:28:38.877832 832 log.go:172] (0xc000962370) 
(0xc00099e820) Stream removed, broadcasting: 1\nI0204 13:28:38.877845 832 log.go:172] (0xc000962370) (0xc00099e8c0) Stream removed, broadcasting: 3\nI0204 13:28:38.877858 832 log.go:172] (0xc000962370) (0xc0005f4280) Stream removed, broadcasting: 5\nI0204 13:28:38.878408 832 log.go:172] (0xc000962370) Go away received\n" Feb 4 13:28:38.890: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 13:28:38.890: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 4 13:28:38.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5276 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 13:28:39.335: INFO: stderr: "I0204 13:28:39.018447 852 log.go:172] (0xc000736160) (0xc000652460) Create stream\nI0204 13:28:39.018532 852 log.go:172] (0xc000736160) (0xc000652460) Stream added, broadcasting: 1\nI0204 13:28:39.021910 852 log.go:172] (0xc000736160) Reply frame received for 1\nI0204 13:28:39.021960 852 log.go:172] (0xc000736160) (0xc00003ba40) Create stream\nI0204 13:28:39.021967 852 log.go:172] (0xc000736160) (0xc00003ba40) Stream added, broadcasting: 3\nI0204 13:28:39.023509 852 log.go:172] (0xc000736160) Reply frame received for 3\nI0204 13:28:39.023526 852 log.go:172] (0xc000736160) (0xc000652500) Create stream\nI0204 13:28:39.023531 852 log.go:172] (0xc000736160) (0xc000652500) Stream added, broadcasting: 5\nI0204 13:28:39.024344 852 log.go:172] (0xc000736160) Reply frame received for 5\nI0204 13:28:39.160815 852 log.go:172] (0xc000736160) Data frame received for 5\nI0204 13:28:39.160849 852 log.go:172] (0xc000652500) (5) Data frame handling\nI0204 13:28:39.160866 852 log.go:172] (0xc000652500) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 13:28:39.236594 852 log.go:172] (0xc000736160) Data frame received for 3\nI0204 13:28:39.236622 852 log.go:172] 
(0xc00003ba40) (3) Data frame handling\nI0204 13:28:39.236641 852 log.go:172] (0xc00003ba40) (3) Data frame sent\nI0204 13:28:39.322703 852 log.go:172] (0xc000736160) (0xc00003ba40) Stream removed, broadcasting: 3\nI0204 13:28:39.323037 852 log.go:172] (0xc000736160) Data frame received for 1\nI0204 13:28:39.323075 852 log.go:172] (0xc000652460) (1) Data frame handling\nI0204 13:28:39.323096 852 log.go:172] (0xc000736160) (0xc000652500) Stream removed, broadcasting: 5\nI0204 13:28:39.323157 852 log.go:172] (0xc000652460) (1) Data frame sent\nI0204 13:28:39.323239 852 log.go:172] (0xc000736160) (0xc000652460) Stream removed, broadcasting: 1\nI0204 13:28:39.323306 852 log.go:172] (0xc000736160) Go away received\nI0204 13:28:39.324214 852 log.go:172] (0xc000736160) (0xc000652460) Stream removed, broadcasting: 1\nI0204 13:28:39.324244 852 log.go:172] (0xc000736160) (0xc00003ba40) Stream removed, broadcasting: 3\nI0204 13:28:39.324257 852 log.go:172] (0xc000736160) (0xc000652500) Stream removed, broadcasting: 5\n" Feb 4 13:28:39.336: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 13:28:39.336: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 4 13:28:39.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5276 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 4 13:28:39.846: INFO: stderr: "I0204 13:28:39.526964 870 log.go:172] (0xc000966370) (0xc0008c26e0) Create stream\nI0204 13:28:39.527094 870 log.go:172] (0xc000966370) (0xc0008c26e0) Stream added, broadcasting: 1\nI0204 13:28:39.532840 870 log.go:172] (0xc000966370) Reply frame received for 1\nI0204 13:28:39.532867 870 log.go:172] (0xc000966370) (0xc0000da320) Create stream\nI0204 13:28:39.532874 870 log.go:172] (0xc000966370) (0xc0000da320) Stream added, broadcasting: 3\nI0204 13:28:39.535348 870 log.go:172] 
(0xc000966370) Reply frame received for 3\nI0204 13:28:39.535396 870 log.go:172] (0xc000966370) (0xc000878000) Create stream\nI0204 13:28:39.535426 870 log.go:172] (0xc000966370) (0xc000878000) Stream added, broadcasting: 5\nI0204 13:28:39.537165 870 log.go:172] (0xc000966370) Reply frame received for 5\nI0204 13:28:39.670117 870 log.go:172] (0xc000966370) Data frame received for 5\nI0204 13:28:39.670203 870 log.go:172] (0xc000878000) (5) Data frame handling\nI0204 13:28:39.670250 870 log.go:172] (0xc000878000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 13:28:39.704545 870 log.go:172] (0xc000966370) Data frame received for 3\nI0204 13:28:39.704576 870 log.go:172] (0xc0000da320) (3) Data frame handling\nI0204 13:28:39.704594 870 log.go:172] (0xc0000da320) (3) Data frame sent\nI0204 13:28:39.837434 870 log.go:172] (0xc000966370) Data frame received for 1\nI0204 13:28:39.837795 870 log.go:172] (0xc000966370) (0xc0000da320) Stream removed, broadcasting: 3\nI0204 13:28:39.837830 870 log.go:172] (0xc0008c26e0) (1) Data frame handling\nI0204 13:28:39.837849 870 log.go:172] (0xc0008c26e0) (1) Data frame sent\nI0204 13:28:39.837884 870 log.go:172] (0xc000966370) (0xc000878000) Stream removed, broadcasting: 5\nI0204 13:28:39.837950 870 log.go:172] (0xc000966370) (0xc0008c26e0) Stream removed, broadcasting: 1\nI0204 13:28:39.837964 870 log.go:172] (0xc000966370) Go away received\nI0204 13:28:39.838330 870 log.go:172] (0xc000966370) (0xc0008c26e0) Stream removed, broadcasting: 1\nI0204 13:28:39.838347 870 log.go:172] (0xc000966370) (0xc0000da320) Stream removed, broadcasting: 3\nI0204 13:28:39.838355 870 log.go:172] (0xc000966370) (0xc000878000) Stream removed, broadcasting: 5\n" Feb 4 13:28:39.847: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 4 13:28:39.847: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 4 13:28:39.847: 
INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:28:39.861: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 4 13:28:49.896: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:28:49.897: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:28:49.897: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:28:49.922: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999427s Feb 4 13:28:50.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990371365s Feb 4 13:28:51.981: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.940725274s Feb 4 13:28:52.993: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.931855151s Feb 4 13:28:54.009: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.919618823s Feb 4 13:28:55.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.903173678s Feb 4 13:28:56.037: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.884020845s Feb 4 13:28:57.056: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.875464325s Feb 4 13:28:58.071: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.856674491s Feb 4 13:28:59.084: INFO: Verifying statefulset ss doesn't scale past 3 for another 841.959578ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5276 Feb 4 13:29:00.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5276 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 13:29:00.878: INFO: stderr: "I0204 13:29:00.290423 891 log.go:172] (0xc000aaa370) (0xc000788780) Create stream\nI0204 13:29:00.291196 891 log.go:172] (0xc000aaa370) (0xc000788780) Stream added, broadcasting: 1\nI0204 
13:29:00.297376 891 log.go:172] (0xc000aaa370) Reply frame received for 1\nI0204 13:29:00.297472 891 log.go:172] (0xc000aaa370) (0xc000a6e000) Create stream\nI0204 13:29:00.297505 891 log.go:172] (0xc000aaa370) (0xc000a6e000) Stream added, broadcasting: 3\nI0204 13:29:00.304369 891 log.go:172] (0xc000aaa370) Reply frame received for 3\nI0204 13:29:00.304426 891 log.go:172] (0xc000aaa370) (0xc0006341e0) Create stream\nI0204 13:29:00.304449 891 log.go:172] (0xc000aaa370) (0xc0006341e0) Stream added, broadcasting: 5\nI0204 13:29:00.307601 891 log.go:172] (0xc000aaa370) Reply frame received for 5\nI0204 13:29:00.485086 891 log.go:172] (0xc000aaa370) Data frame received for 3\nI0204 13:29:00.485164 891 log.go:172] (0xc000a6e000) (3) Data frame handling\nI0204 13:29:00.485190 891 log.go:172] (0xc000a6e000) (3) Data frame sent\nI0204 13:29:00.486075 891 log.go:172] (0xc000aaa370) Data frame received for 5\nI0204 13:29:00.486092 891 log.go:172] (0xc0006341e0) (5) Data frame handling\nI0204 13:29:00.486109 891 log.go:172] (0xc0006341e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0204 13:29:00.861758 891 log.go:172] (0xc000aaa370) (0xc000a6e000) Stream removed, broadcasting: 3\nI0204 13:29:00.861922 891 log.go:172] (0xc000aaa370) Data frame received for 1\nI0204 13:29:00.861971 891 log.go:172] (0xc000aaa370) (0xc0006341e0) Stream removed, broadcasting: 5\nI0204 13:29:00.862360 891 log.go:172] (0xc000788780) (1) Data frame handling\nI0204 13:29:00.862783 891 log.go:172] (0xc000788780) (1) Data frame sent\nI0204 13:29:00.862969 891 log.go:172] (0xc000aaa370) (0xc000788780) Stream removed, broadcasting: 1\nI0204 13:29:00.863030 891 log.go:172] (0xc000aaa370) Go away received\nI0204 13:29:00.865388 891 log.go:172] (0xc000aaa370) (0xc000788780) Stream removed, broadcasting: 1\nI0204 13:29:00.865472 891 log.go:172] (0xc000aaa370) (0xc000a6e000) Stream removed, broadcasting: 3\nI0204 13:29:00.865555 891 log.go:172] (0xc000aaa370) (0xc0006341e0) Stream 
removed, broadcasting: 5\n" Feb 4 13:29:00.878: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 4 13:29:00.878: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 4 13:29:00.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5276 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 13:29:01.243: INFO: stderr: "I0204 13:29:01.076566 912 log.go:172] (0xc0008f80b0) (0xc0008765a0) Create stream\nI0204 13:29:01.076628 912 log.go:172] (0xc0008f80b0) (0xc0008765a0) Stream added, broadcasting: 1\nI0204 13:29:01.094131 912 log.go:172] (0xc0008f80b0) Reply frame received for 1\nI0204 13:29:01.094200 912 log.go:172] (0xc0008f80b0) (0xc000634140) Create stream\nI0204 13:29:01.094217 912 log.go:172] (0xc0008f80b0) (0xc000634140) Stream added, broadcasting: 3\nI0204 13:29:01.096423 912 log.go:172] (0xc0008f80b0) Reply frame received for 3\nI0204 13:29:01.096496 912 log.go:172] (0xc0008f80b0) (0xc00032a000) Create stream\nI0204 13:29:01.096516 912 log.go:172] (0xc0008f80b0) (0xc00032a000) Stream added, broadcasting: 5\nI0204 13:29:01.097468 912 log.go:172] (0xc0008f80b0) Reply frame received for 5\nI0204 13:29:01.164410 912 log.go:172] (0xc0008f80b0) Data frame received for 3\nI0204 13:29:01.164454 912 log.go:172] (0xc0008f80b0) Data frame received for 5\nI0204 13:29:01.164515 912 log.go:172] (0xc00032a000) (5) Data frame handling\nI0204 13:29:01.164532 912 log.go:172] (0xc00032a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0204 13:29:01.164576 912 log.go:172] (0xc000634140) (3) Data frame handling\nI0204 13:29:01.164592 912 log.go:172] (0xc000634140) (3) Data frame sent\nI0204 13:29:01.236275 912 log.go:172] (0xc0008f80b0) (0xc000634140) Stream removed, broadcasting: 3\nI0204 13:29:01.236766 912 log.go:172] (0xc0008f80b0) Data frame received for 1\nI0204 
13:29:01.236801 912 log.go:172] (0xc0008765a0) (1) Data frame handling\nI0204 13:29:01.236848 912 log.go:172] (0xc0008765a0) (1) Data frame sent\nI0204 13:29:01.236954 912 log.go:172] (0xc0008f80b0) (0xc0008765a0) Stream removed, broadcasting: 1\nI0204 13:29:01.237139 912 log.go:172] (0xc0008f80b0) (0xc00032a000) Stream removed, broadcasting: 5\nI0204 13:29:01.237442 912 log.go:172] (0xc0008f80b0) (0xc0008765a0) Stream removed, broadcasting: 1\nI0204 13:29:01.237503 912 log.go:172] (0xc0008f80b0) (0xc000634140) Stream removed, broadcasting: 3\nI0204 13:29:01.237511 912 log.go:172] (0xc0008f80b0) (0xc00032a000) Stream removed, broadcasting: 5\nI0204 13:29:01.237610 912 log.go:172] (0xc0008f80b0) Go away received\n" Feb 4 13:29:01.244: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 4 13:29:01.244: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 4 13:29:01.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5276 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 4 13:29:01.676: INFO: stderr: "I0204 13:29:01.395985 930 log.go:172] (0xc0006e2a50) (0xc00047aa00) Create stream\nI0204 13:29:01.396074 930 log.go:172] (0xc0006e2a50) (0xc00047aa00) Stream added, broadcasting: 1\nI0204 13:29:01.402354 930 log.go:172] (0xc0006e2a50) Reply frame received for 1\nI0204 13:29:01.402380 930 log.go:172] (0xc0006e2a50) (0xc000648000) Create stream\nI0204 13:29:01.402388 930 log.go:172] (0xc0006e2a50) (0xc000648000) Stream added, broadcasting: 3\nI0204 13:29:01.403367 930 log.go:172] (0xc0006e2a50) Reply frame received for 3\nI0204 13:29:01.403386 930 log.go:172] (0xc0006e2a50) (0xc000648140) Create stream\nI0204 13:29:01.403392 930 log.go:172] (0xc0006e2a50) (0xc000648140) Stream added, broadcasting: 5\nI0204 13:29:01.404341 930 log.go:172] (0xc0006e2a50) Reply frame received for 
5\nI0204 13:29:01.543562 930 log.go:172] (0xc0006e2a50) Data frame received for 5\nI0204 13:29:01.543614 930 log.go:172] (0xc000648140) (5) Data frame handling\nI0204 13:29:01.543625 930 log.go:172] (0xc000648140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0204 13:29:01.543635 930 log.go:172] (0xc0006e2a50) Data frame received for 3\nI0204 13:29:01.543642 930 log.go:172] (0xc000648000) (3) Data frame handling\nI0204 13:29:01.543647 930 log.go:172] (0xc000648000) (3) Data frame sent\nI0204 13:29:01.670597 930 log.go:172] (0xc0006e2a50) Data frame received for 1\nI0204 13:29:01.670640 930 log.go:172] (0xc00047aa00) (1) Data frame handling\nI0204 13:29:01.670650 930 log.go:172] (0xc00047aa00) (1) Data frame sent\nI0204 13:29:01.670660 930 log.go:172] (0xc0006e2a50) (0xc00047aa00) Stream removed, broadcasting: 1\nI0204 13:29:01.671739 930 log.go:172] (0xc0006e2a50) (0xc000648000) Stream removed, broadcasting: 3\nI0204 13:29:01.671851 930 log.go:172] (0xc0006e2a50) (0xc000648140) Stream removed, broadcasting: 5\nI0204 13:29:01.671918 930 log.go:172] (0xc0006e2a50) (0xc00047aa00) Stream removed, broadcasting: 1\nI0204 13:29:01.671932 930 log.go:172] (0xc0006e2a50) (0xc000648000) Stream removed, broadcasting: 3\nI0204 13:29:01.671941 930 log.go:172] (0xc0006e2a50) (0xc000648140) Stream removed, broadcasting: 5\nI0204 13:29:01.672026 930 log.go:172] (0xc0006e2a50) Go away received\n" Feb 4 13:29:01.676: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 4 13:29:01.676: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 4 13:29:01.676: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 4 13:29:31.702: INFO: Deleting 
all statefulset in ns statefulset-5276 Feb 4 13:29:31.707: INFO: Scaling statefulset ss to 0 Feb 4 13:29:31.721: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:29:31.724: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:29:31.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5276" for this suite. Feb 4 13:29:37.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:29:38.062: INFO: namespace statefulset-5276 deletion completed in 6.302734314s • [SLOW TEST:113.679 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:29:38.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-1ad84bd6-48fa-4f4a-a3f6-1de86f36cefb in namespace container-probe-2718 Feb 4 13:29:46.208: INFO: Started pod test-webserver-1ad84bd6-48fa-4f4a-a3f6-1de86f36cefb in namespace container-probe-2718 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 13:29:46.216: INFO: Initial restart count of pod test-webserver-1ad84bd6-48fa-4f4a-a3f6-1de86f36cefb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:33:47.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2718" for this suite. 
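[Editor's note] The test-webserver pod above exercises an HTTP liveness probe against /healthz that is expected never to trigger a restart. A standalone manifest with the same shape can be sketched as follows; the image name, port, and probe thresholds are illustrative assumptions, not values taken from this log:

```shell
# Sketch of an HTTP liveness-probe pod like the test-webserver pod above.
# The image, port, and thresholds are assumptions for illustration only.
cat > /tmp/liveness-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/test-webserver   # assumed image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
EOF
# kubectl apply -f /tmp/liveness-pod.yaml   # requires a running cluster
```

As long as /healthz keeps returning 2xx, the kubelet leaves the container alone, which is exactly what the test verifies by checking that restartCount stays at 0.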
Feb 4 13:33:54.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:33:54.261: INFO: namespace container-probe-2718 deletion completed in 6.233748444s • [SLOW TEST:256.198 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:33:54.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 4 13:33:54.454: INFO: Waiting up to 5m0s for pod "pod-078a5585-8fa5-46ef-8699-b68e1f9eb396" in namespace "emptydir-337" to be "success or failure" Feb 4 13:33:54.463: INFO: Pod "pod-078a5585-8fa5-46ef-8699-b68e1f9eb396": Phase="Pending", Reason="", readiness=false. Elapsed: 8.713687ms Feb 4 13:33:56.482: INFO: Pod "pod-078a5585-8fa5-46ef-8699-b68e1f9eb396": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028605823s Feb 4 13:33:58.494: INFO: Pod "pod-078a5585-8fa5-46ef-8699-b68e1f9eb396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040177243s Feb 4 13:34:00.511: INFO: Pod "pod-078a5585-8fa5-46ef-8699-b68e1f9eb396": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057247956s Feb 4 13:34:02.526: INFO: Pod "pod-078a5585-8fa5-46ef-8699-b68e1f9eb396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072550063s STEP: Saw pod success Feb 4 13:34:02.527: INFO: Pod "pod-078a5585-8fa5-46ef-8699-b68e1f9eb396" satisfied condition "success or failure" Feb 4 13:34:02.531: INFO: Trying to get logs from node iruya-node pod pod-078a5585-8fa5-46ef-8699-b68e1f9eb396 container test-container: STEP: delete the pod Feb 4 13:34:02.659: INFO: Waiting for pod pod-078a5585-8fa5-46ef-8699-b68e1f9eb396 to disappear Feb 4 13:34:02.682: INFO: Pod pod-078a5585-8fa5-46ef-8699-b68e1f9eb396 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:34:02.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-337" for this suite. 
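[Editor's note] The (root,0666,default) check above writes a file into an emptyDir mount with mode 0666 and reads the permission bits back. The mode check itself can be reproduced outside a cluster; the temp directory below is an arbitrary stand-in for the emptyDir mount path:

```shell
# Local stand-in for the (root,0666,default) emptyDir check: write a file,
# force mode 0666, and read the permission bits back as the test container would.
d=$(mktemp -d)
echo mount-volume > "$d/test-file"
chmod 0666 "$d/test-file"
stat -c '%a' "$d/test-file"    # GNU stat; prints 666
```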
Feb 4 13:34:08.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:34:08.856: INFO: namespace emptydir-337 deletion completed in 6.154627055s • [SLOW TEST:14.595 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:34:08.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 4 13:34:09.653: INFO: Pod name wrapped-volume-race-1974c381-fb09-4f24-badb-9909608e0cf3: Found 0 pods out of 5 Feb 4 13:34:14.725: INFO: Pod name wrapped-volume-race-1974c381-fb09-4f24-badb-9909608e0cf3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1974c381-fb09-4f24-badb-9909608e0cf3 in namespace emptydir-wrapper-3450, will wait for the garbage collector to delete the pods Feb 4 13:34:46.833: INFO: Deleting ReplicationController wrapped-volume-race-1974c381-fb09-4f24-badb-9909608e0cf3 took: 
14.116717ms Feb 4 13:34:47.233: INFO: Terminating ReplicationController wrapped-volume-race-1974c381-fb09-4f24-badb-9909608e0cf3 pods took: 400.536397ms STEP: Creating RC which spawns configmap-volume pods Feb 4 13:35:36.700: INFO: Pod name wrapped-volume-race-ca298719-85e1-4e86-8cec-a76394e7dc8c: Found 0 pods out of 5 Feb 4 13:35:41.715: INFO: Pod name wrapped-volume-race-ca298719-85e1-4e86-8cec-a76394e7dc8c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ca298719-85e1-4e86-8cec-a76394e7dc8c in namespace emptydir-wrapper-3450, will wait for the garbage collector to delete the pods Feb 4 13:36:15.887: INFO: Deleting ReplicationController wrapped-volume-race-ca298719-85e1-4e86-8cec-a76394e7dc8c took: 18.144174ms Feb 4 13:36:16.289: INFO: Terminating ReplicationController wrapped-volume-race-ca298719-85e1-4e86-8cec-a76394e7dc8c pods took: 401.505098ms STEP: Creating RC which spawns configmap-volume pods Feb 4 13:37:07.671: INFO: Pod name wrapped-volume-race-a3400b3d-1382-41de-9745-f7d0fa1f79f5: Found 0 pods out of 5 Feb 4 13:37:12.687: INFO: Pod name wrapped-volume-race-a3400b3d-1382-41de-9745-f7d0fa1f79f5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a3400b3d-1382-41de-9745-f7d0fa1f79f5 in namespace emptydir-wrapper-3450, will wait for the garbage collector to delete the pods Feb 4 13:37:48.832: INFO: Deleting ReplicationController wrapped-volume-race-a3400b3d-1382-41de-9745-f7d0fa1f79f5 took: 18.350864ms Feb 4 13:37:49.133: INFO: Terminating ReplicationController wrapped-volume-race-a3400b3d-1382-41de-9745-f7d0fa1f79f5 pods took: 300.805762ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:38:38.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"emptydir-wrapper-3450" for this suite. Feb 4 13:38:48.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:38:48.503: INFO: namespace emptydir-wrapper-3450 deletion completed in 10.17706914s • [SLOW TEST:279.647 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:38:48.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Feb 4 13:38:48.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3780' Feb 4 13:38:50.759: INFO: stderr: "" Feb 4 13:38:50.759: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
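[Editor's note] The "Waiting for Redis master to start" sequence that follows is a simple poll-until-ready loop with a 5m0s timeout. A generic shell version of that pattern (the function name and 1-second interval are illustrative, not the framework's implementation) looks like:

```shell
# Poll a command until it succeeds or the deadline passes (illustrative helper).
wait_for() {                       # usage: wait_for <timeout_seconds> <cmd...>
  local deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}
# e.g. wait_for 300 sh -c 'kubectl get pods -l app=redis | grep -q Running'
```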
Feb 4 13:38:51.782: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:38:51.782: INFO: Found 0 / 1 Feb 4 13:38:52.774: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:38:52.774: INFO: Found 0 / 1 Feb 4 13:38:53.781: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:38:53.781: INFO: Found 0 / 1 Feb 4 13:38:54.774: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:38:54.775: INFO: Found 0 / 1 Feb 4 13:38:55.773: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:38:55.773: INFO: Found 0 / 1 Feb 4 13:38:56.767: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:38:56.767: INFO: Found 0 / 1 Feb 4 13:38:57.771: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:38:57.772: INFO: Found 0 / 1 Feb 4 13:38:58.770: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:38:58.770: INFO: Found 0 / 1 Feb 4 13:38:59.769: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:38:59.770: INFO: Found 0 / 1 Feb 4 13:39:00.770: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:39:00.770: INFO: Found 0 / 1 Feb 4 13:39:01.772: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:39:01.772: INFO: Found 0 / 1 Feb 4 13:39:02.779: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:39:02.779: INFO: Found 1 / 1 Feb 4 13:39:02.779: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 4 13:39:02.787: INFO: Selector matched 1 pods for map[app:redis] Feb 4 13:39:02.787: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
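[Editor's note] The "patching all pods" step applies a JSON merge patch adding the annotation x=y. The Go framework passes the JSON as a single argv element, so it appears unquoted in the log; when reproducing the command from an interactive shell, the payload must be quoted (pod name and namespace below are the ones from this run):

```shell
# JSON merge patch adding annotation x=y, as in the 'patching all pods' step.
# Quoting is required when typing this into a shell, unlike in the logged argv.
PATCH='{"metadata":{"annotations":{"x":"y"}}}'
echo "$PATCH"
# kubectl patch pod redis-master-8ldpf --namespace=kubectl-3780 -p "$PATCH"
```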
Feb 4 13:39:02.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-8ldpf --namespace=kubectl-3780 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 4 13:39:02.919: INFO: stderr: ""
Feb 4 13:39:02.919: INFO: stdout: "pod/redis-master-8ldpf patched\n"
STEP: checking annotations
Feb 4 13:39:02.999: INFO: Selector matched 1 pods for map[app:redis]
Feb 4 13:39:02.999: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:39:02.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3780" for this suite.
Feb 4 13:39:25.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:39:25.148: INFO: namespace kubectl-3780 deletion completed in 22.145694723s

• [SLOW TEST:36.645 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:39:25.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3826/configmap-test-b9b95b69-34b2-43ac-bd69-54ee900e5727
STEP: Creating a pod to test consume configMaps
Feb 4 13:39:25.350: INFO: Waiting up to 5m0s for pod "pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b" in namespace "configmap-3826" to be "success or failure"
Feb 4 13:39:25.449: INFO: Pod "pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b": Phase="Pending", Reason="", readiness=false. Elapsed: 98.161078ms
Feb 4 13:39:27.460: INFO: Pod "pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109489085s
Feb 4 13:39:29.468: INFO: Pod "pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117684583s
Feb 4 13:39:31.514: INFO: Pod "pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163642374s
Feb 4 13:39:33.522: INFO: Pod "pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b": Phase="Running", Reason="", readiness=true. Elapsed: 8.171298686s
Feb 4 13:39:35.557: INFO: Pod "pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.206576399s
STEP: Saw pod success
Feb 4 13:39:35.557: INFO: Pod "pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b" satisfied condition "success or failure"
Feb 4 13:39:35.563: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b container env-test: 
STEP: delete the pod
Feb 4 13:39:35.792: INFO: Waiting for pod pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b to disappear
Feb 4 13:39:35.867: INFO: Pod pod-configmaps-8457b326-5e66-43af-9c45-3b046a39354b no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:39:35.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3826" for this suite.
Feb 4 13:39:41.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:39:42.035: INFO: namespace configmap-3826 deletion completed in 6.156525417s

• [SLOW TEST:16.887 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:39:42.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5888.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5888.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 4 13:39:54.217: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c: the server could not find the requested resource (get pods dns-test-c7507d67-3745-4989-8215-95df72373b2c)
Feb 4 13:39:54.222: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c: the server could not find the requested resource (get pods dns-test-c7507d67-3745-4989-8215-95df72373b2c)
Feb 4 13:39:54.229: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c: the server could not find the requested resource (get pods dns-test-c7507d67-3745-4989-8215-95df72373b2c)
Feb 4 13:39:54.236: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c: the server could not find the requested resource (get pods dns-test-c7507d67-3745-4989-8215-95df72373b2c)
Feb 4 13:39:54.243: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c: the server could not find the requested resource (get pods dns-test-c7507d67-3745-4989-8215-95df72373b2c)
Feb 4 13:39:54.249: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c: the server could not find the requested resource (get pods dns-test-c7507d67-3745-4989-8215-95df72373b2c)
Feb 4 13:39:54.255: INFO: Unable to read jessie_udp@PodARecord from pod dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c: the server could not find the requested resource (get pods dns-test-c7507d67-3745-4989-8215-95df72373b2c)
Feb 4 13:39:54.262: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c: the server could not find the requested resource (get pods dns-test-c7507d67-3745-4989-8215-95df72373b2c)
Feb 4 13:39:54.262: INFO: Lookups using dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 4 13:39:59.353: INFO: DNS probes using dns-5888/dns-test-c7507d67-3745-4989-8215-95df72373b2c succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:39:59.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5888" for this suite.
Feb 4 13:40:05.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:40:05.630: INFO: namespace dns-5888 deletion completed in 6.163397284s

• [SLOW TEST:23.595 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:40:05.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:40:56.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-345" for this suite.
Feb 4 13:41:02.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:41:02.531: INFO: namespace container-runtime-345 deletion completed in 6.180055613s

• [SLOW TEST:56.900 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:41:02.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 4 13:41:02.683: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 4 13:41:04.172: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:41:05.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1760" for this suite.
Feb 4 13:41:16.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:41:16.234: INFO: namespace replication-controller-1760 deletion completed in 10.294562711s

• [SLOW TEST:13.701 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:41:16.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 4 13:41:16.298: INFO: Waiting up to 5m0s for pod "pod-c4c25132-3128-4719-80ae-c3d36afabac5" in namespace "emptydir-9210" to be "success or failure"
Feb 4 13:41:16.340: INFO: Pod "pod-c4c25132-3128-4719-80ae-c3d36afabac5": Phase="Pending", Reason="", readiness=false. Elapsed: 41.95332ms
Feb 4 13:41:18.350: INFO: Pod "pod-c4c25132-3128-4719-80ae-c3d36afabac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052250552s
Feb 4 13:41:20.362: INFO: Pod "pod-c4c25132-3128-4719-80ae-c3d36afabac5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064247037s
Feb 4 13:41:22.372: INFO: Pod "pod-c4c25132-3128-4719-80ae-c3d36afabac5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073546464s
Feb 4 13:41:24.378: INFO: Pod "pod-c4c25132-3128-4719-80ae-c3d36afabac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079875741s
STEP: Saw pod success
Feb 4 13:41:24.378: INFO: Pod "pod-c4c25132-3128-4719-80ae-c3d36afabac5" satisfied condition "success or failure"
Feb 4 13:41:24.381: INFO: Trying to get logs from node iruya-node pod pod-c4c25132-3128-4719-80ae-c3d36afabac5 container test-container: 
STEP: delete the pod
Feb 4 13:41:24.528: INFO: Waiting for pod pod-c4c25132-3128-4719-80ae-c3d36afabac5 to disappear
Feb 4 13:41:24.542: INFO: Pod pod-c4c25132-3128-4719-80ae-c3d36afabac5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:41:24.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9210" for this suite.
Feb 4 13:41:30.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:41:30.659: INFO: namespace emptydir-9210 deletion completed in 6.111672977s

• [SLOW TEST:14.425 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:41:30.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 4 13:41:30.820: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 4 13:41:35.835: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:41:35.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-847" for this suite.
Feb 4 13:41:42.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:41:42.180: INFO: namespace replication-controller-847 deletion completed in 6.251415085s

• [SLOW TEST:11.521 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:41:42.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-85ba837b-4a42-438c-a398-b8dce68fa492
STEP: Creating secret with name secret-projected-all-test-volume-e66ad611-9619-40b3-b361-67d01f619dbe
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 4 13:41:42.391: INFO: Waiting up to 5m0s for pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f" in namespace "projected-6802" to be "success or failure"
Feb 4 13:41:42.403: INFO: Pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.402081ms
Feb 4 13:41:44.412: INFO: Pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020770609s
Feb 4 13:41:46.912: INFO: Pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.520818174s
Feb 4 13:41:48.936: INFO: Pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.545129895s
Feb 4 13:41:50.954: INFO: Pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563136837s
Feb 4 13:41:52.968: INFO: Pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.577146385s
Feb 4 13:41:54.977: INFO: Pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.585817298s
Feb 4 13:41:56.997: INFO: Pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.605991792s
STEP: Saw pod success
Feb 4 13:41:56.997: INFO: Pod "projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f" satisfied condition "success or failure"
Feb 4 13:41:57.004: INFO: Trying to get logs from node iruya-node pod projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f container projected-all-volume-test: 
STEP: delete the pod
Feb 4 13:41:57.117: INFO: Waiting for pod projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f to disappear
Feb 4 13:41:57.122: INFO: Pod projected-volume-d04d013d-db79-4dae-8bda-924ab4dfa81f no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:41:57.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6802" for this suite.
Feb 4 13:42:03.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:42:03.300: INFO: namespace projected-6802 deletion completed in 6.168449314s

• [SLOW TEST:21.118 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:42:03.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5bc9d396-c147-407e-bca3-874708c82078
STEP: Creating a pod to test consume configMaps
Feb 4 13:42:03.440: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9" in namespace "configmap-3874" to be "success or failure"
Feb 4 13:42:03.447: INFO: Pod "pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.951255ms
Feb 4 13:42:05.456: INFO: Pod "pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015628113s
Feb 4 13:42:07.472: INFO: Pod "pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032049824s
Feb 4 13:42:09.498: INFO: Pod "pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057854023s
Feb 4 13:42:11.516: INFO: Pod "pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07593227s
STEP: Saw pod success
Feb 4 13:42:11.516: INFO: Pod "pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9" satisfied condition "success or failure"
Feb 4 13:42:11.521: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9 container configmap-volume-test: 
STEP: delete the pod
Feb 4 13:42:11.573: INFO: Waiting for pod pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9 to disappear
Feb 4 13:42:11.579: INFO: Pod pod-configmaps-5f777ebe-0b3a-4e68-b639-1a7d3f9fc9b9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:42:11.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3874" for this suite.
Feb 4 13:42:17.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:42:17.833: INFO: namespace configmap-3874 deletion completed in 6.248648099s

• [SLOW TEST:14.533 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:42:17.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 4 13:42:17.894: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 4 13:42:17.915: INFO: Waiting for terminating namespaces to be deleted...
Feb 4 13:42:17.919: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 4 13:42:17.967: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 4 13:42:17.967: INFO: Container kube-proxy ready: true, restart count 0
Feb 4 13:42:17.967: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 4 13:42:17.967: INFO: Container weave ready: true, restart count 0
Feb 4 13:42:17.967: INFO: Container weave-npc ready: true, restart count 0
Feb 4 13:42:17.967: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 4 13:42:17.982: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 4 13:42:17.982: INFO: Container kube-apiserver ready: true, restart count 0
Feb 4 13:42:17.982: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 4 13:42:17.982: INFO: Container kube-scheduler ready: true, restart count 13
Feb 4 13:42:17.982: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 4 13:42:17.982: INFO: Container coredns ready: true, restart count 0
Feb 4 13:42:17.982: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 4 13:42:17.982: INFO: Container coredns ready: true, restart count 0
Feb 4 13:42:17.982: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 4 13:42:17.982: INFO: Container etcd ready: true, restart count 0
Feb 4 13:42:17.982: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 4 13:42:17.982: INFO: Container weave ready: true, restart count 0
Feb 4 13:42:17.982: INFO: Container weave-npc ready: true, restart count 0
Feb 4 13:42:17.982: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 4 13:42:17.982: INFO: Container kube-controller-manager ready: true, restart count 20
Feb 4 13:42:17.982: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 4 13:42:17.982: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f036d06d1503d4], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:42:19.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3665" for this suite.
Feb 4 13:42:25.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:42:25.205: INFO: namespace sched-pred-3665 deletion completed in 6.168316291s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.371 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:42:25.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-a929436f-a3e7-4d77-b57b-f2d5a4eb0779
STEP: Creating a pod to test consume configMaps
Feb 4 13:42:25.314: INFO: Waiting up to 5m0s for pod "pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03" in namespace "configmap-7687" to be "success or failure"
Feb 4 13:42:25.421: INFO: Pod "pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03": Phase="Pending", Reason="", readiness=false. Elapsed: 107.425565ms
Feb 4 13:42:27.431: INFO: Pod "pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117577165s
Feb 4 13:42:29.455: INFO: Pod "pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141693795s
Feb 4 13:42:31.481: INFO: Pod "pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167585559s
Feb 4 13:42:33.491: INFO: Pod "pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177100521s
Feb 4 13:42:35.503: INFO: Pod "pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.189415851s
STEP: Saw pod success
Feb 4 13:42:35.503: INFO: Pod "pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03" satisfied condition "success or failure"
Feb 4 13:42:35.508: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03 container configmap-volume-test:
STEP: delete the pod
Feb 4 13:42:35.810: INFO: Waiting for pod pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03 to disappear
Feb 4 13:42:35.827: INFO: Pod pod-configmaps-b81c07c7-f41a-433f-8ce5-a9a938c06e03 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:42:35.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7687" for this suite.
Feb 4 13:42:41.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:42:42.081: INFO: namespace configmap-7687 deletion completed in 6.24678154s

• [SLOW TEST:16.875 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:42:42.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1883e4eb-2b4b-46e3-ad56-93bd94bd119d
STEP: Creating a pod to test consume secrets
Feb 4 13:42:42.221: INFO: Waiting up to 5m0s for pod "pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5" in namespace "secrets-4172" to be "success or failure"
Feb 4 13:42:42.246: INFO: Pod "pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.676456ms
Feb 4 13:42:44.255: INFO: Pod "pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034074315s
Feb 4 13:42:46.271: INFO: Pod "pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050294583s
Feb 4 13:42:48.279: INFO: Pod "pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058147543s
Feb 4 13:42:50.287: INFO: Pod "pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065591219s
STEP: Saw pod success
Feb 4 13:42:50.287: INFO: Pod "pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5" satisfied condition "success or failure"
Feb 4 13:42:50.292: INFO: Trying to get logs from node iruya-node pod pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5 container secret-volume-test:
STEP: delete the pod
Feb 4 13:42:50.380: INFO: Waiting for pod pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5 to disappear
Feb 4 13:42:50.419: INFO: Pod pod-secrets-eb74f358-4d41-4c33-8ea4-5b0af7cfeca5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:42:50.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4172" for this suite.
Feb 4 13:42:56.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:42:56.630: INFO: namespace secrets-4172 deletion completed in 6.175135226s

• [SLOW TEST:14.548 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:42:56.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-0876241b-4571-4a3b-b3b7-54571f60d1c9
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:42:56.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1282" for this suite.
Feb 4 13:43:02.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:43:03.019: INFO: namespace configmap-1282 deletion completed in 6.196065953s

• [SLOW TEST:6.386 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:43:03.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 4 13:43:03.395: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8e22a119-9f55-4eba-998b-a6d87ed6ce0a", Controller:(*bool)(0xc002237b1a), BlockOwnerDeletion:(*bool)(0xc002237b1b)}}
Feb 4 13:43:03.490: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"b2dbf099-0757-4e08-9696-f99ef5143091", Controller:(*bool)(0xc002237cfa), BlockOwnerDeletion:(*bool)(0xc002237cfb)}}
Feb 4 13:43:03.506: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"964bb3c5-ce20-4d96-8e9d-baff0a6426ac", Controller:(*bool)(0xc0030f9212), BlockOwnerDeletion:(*bool)(0xc0030f9213)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:43:08.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9659" for this suite.
Feb 4 13:43:14.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:43:14.711: INFO: namespace gc-9659 deletion completed in 6.165745071s

• [SLOW TEST:11.692 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:43:14.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 4 13:43:22.968: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-1dde1081-3437-47bc-a03e-013665763921,GenerateName:,Namespace:events-7966,SelfLink:/api/v1/namespaces/events-7966/pods/send-events-1dde1081-3437-47bc-a03e-013665763921,UID:3aba1e49-8e60-4b02-9ad1-5fa469d8b1e7,ResourceVersion:23070841,Generation:0,CreationTimestamp:2020-02-04 13:43:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 786367685,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hxgb5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxgb5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-hxgb5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000faea20} {node.kubernetes.io/unreachable Exists NoExecute 0xc000faea40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:43:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:43:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:43:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:43:14 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-04 13:43:15 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-04 13:43:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://ec5ca2eb9cf37102b9a8c8020ba1a402928b15160e9d0ace3c8b341844d9975e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Feb 4 13:43:24.986: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 4 13:43:27.002: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:43:27.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7966" for this suite.
Feb 4 13:44:07.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:44:07.333: INFO: namespace events-7966 deletion completed in 40.292174279s

• [SLOW TEST:52.621 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:44:07.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 4 13:44:07.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938" in namespace "downward-api-873" to be "success or failure"
Feb 4 13:44:07.424: INFO: Pod "downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938": Phase="Pending", Reason="", readiness=false. Elapsed: 9.232324ms
Feb 4 13:44:09.447: INFO: Pod "downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031416254s
Feb 4 13:44:11.461: INFO: Pod "downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046087894s
Feb 4 13:44:13.469: INFO: Pod "downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053318438s
Feb 4 13:44:15.475: INFO: Pod "downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059345098s
Feb 4 13:44:17.488: INFO: Pod "downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07316218s
STEP: Saw pod success
Feb 4 13:44:17.489: INFO: Pod "downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938" satisfied condition "success or failure"
Feb 4 13:44:17.495: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938 container client-container:
STEP: delete the pod
Feb 4 13:44:17.643: INFO: Waiting for pod downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938 to disappear
Feb 4 13:44:17.659: INFO: Pod downwardapi-volume-3e08a3b6-8c6e-47ba-9384-c1de2df34938 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:44:17.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-873" for this suite.
Feb 4 13:44:23.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 4 13:44:23.828: INFO: namespace downward-api-873 deletion completed in 6.155558617s

• [SLOW TEST:16.495 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 4 13:44:23.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fe7117c8-1a6a-4f1f-9d21-cbde49a97d8b
STEP: Creating a pod to test consume secrets
Feb 4 13:44:23.971: INFO: Waiting up to 5m0s for pod "pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e" in namespace "secrets-3076" to be "success or failure"
Feb 4 13:44:23.983: INFO: Pod "pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.451997ms
Feb 4 13:44:25.990: INFO: Pod "pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018923597s
Feb 4 13:44:28.000: INFO: Pod "pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028300743s
Feb 4 13:44:30.012: INFO: Pod "pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040112141s
Feb 4 13:44:32.031: INFO: Pod "pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059675864s
Feb 4 13:44:34.040: INFO: Pod "pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068687909s
STEP: Saw pod success
Feb 4 13:44:34.040: INFO: Pod "pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e" satisfied condition "success or failure"
Feb 4 13:44:34.044: INFO: Trying to get logs from node iruya-node pod pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e container secret-env-test:
STEP: delete the pod
Feb 4 13:44:34.281: INFO: Waiting for pod pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e to disappear
Feb 4 13:44:34.291: INFO: Pod pod-secrets-1350d655-0628-41cd-af7b-e5b816eab24e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 4 13:44:34.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3076" for this suite.
Feb 4 13:44:40.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:44:40.534: INFO: namespace secrets-3076 deletion completed in 6.233858318s • [SLOW TEST:16.705 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:44:40.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Feb 4 13:44:52.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-8cc4bc1b-204e-4a0a-b553-70e4a81506ab -c busybox-main-container --namespace=emptydir-1238 -- cat /usr/share/volumeshare/shareddata.txt' Feb 4 13:44:53.189: INFO: stderr: "I0204 13:44:52.916405 999 log.go:172] (0xc0009402c0) (0xc00092e820) Create stream\nI0204 13:44:52.916494 999 log.go:172] (0xc0009402c0) (0xc00092e820) Stream added, broadcasting: 1\nI0204 13:44:52.925429 999 log.go:172] (0xc0009402c0) Reply frame received for 
1\nI0204 13:44:52.925465 999 log.go:172] (0xc0009402c0) (0xc0005e01e0) Create stream\nI0204 13:44:52.925474 999 log.go:172] (0xc0009402c0) (0xc0005e01e0) Stream added, broadcasting: 3\nI0204 13:44:52.926899 999 log.go:172] (0xc0009402c0) Reply frame received for 3\nI0204 13:44:52.926926 999 log.go:172] (0xc0009402c0) (0xc00092e8c0) Create stream\nI0204 13:44:52.926936 999 log.go:172] (0xc0009402c0) (0xc00092e8c0) Stream added, broadcasting: 5\nI0204 13:44:52.929117 999 log.go:172] (0xc0009402c0) Reply frame received for 5\nI0204 13:44:53.046630 999 log.go:172] (0xc0009402c0) Data frame received for 3\nI0204 13:44:53.046730 999 log.go:172] (0xc0005e01e0) (3) Data frame handling\nI0204 13:44:53.046746 999 log.go:172] (0xc0005e01e0) (3) Data frame sent\nI0204 13:44:53.180326 999 log.go:172] (0xc0009402c0) Data frame received for 1\nI0204 13:44:53.180412 999 log.go:172] (0xc0009402c0) (0xc00092e8c0) Stream removed, broadcasting: 5\nI0204 13:44:53.180470 999 log.go:172] (0xc00092e820) (1) Data frame handling\nI0204 13:44:53.180501 999 log.go:172] (0xc00092e820) (1) Data frame sent\nI0204 13:44:53.180524 999 log.go:172] (0xc0009402c0) (0xc0005e01e0) Stream removed, broadcasting: 3\nI0204 13:44:53.180564 999 log.go:172] (0xc0009402c0) (0xc00092e820) Stream removed, broadcasting: 1\nI0204 13:44:53.180674 999 log.go:172] (0xc0009402c0) Go away received\nI0204 13:44:53.181149 999 log.go:172] (0xc0009402c0) (0xc00092e820) Stream removed, broadcasting: 1\nI0204 13:44:53.181182 999 log.go:172] (0xc0009402c0) (0xc0005e01e0) Stream removed, broadcasting: 3\nI0204 13:44:53.181201 999 log.go:172] (0xc0009402c0) (0xc00092e8c0) Stream removed, broadcasting: 5\n" Feb 4 13:44:53.189: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:44:53.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"emptydir-1238" for this suite. Feb 4 13:44:59.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:44:59.326: INFO: namespace emptydir-1238 deletion completed in 6.127281151s • [SLOW TEST:18.792 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:44:59.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 4 13:44:59.397: INFO: Creating deployment "test-recreate-deployment" Feb 4 13:44:59.405: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 4 13:44:59.481: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 4 13:45:01.500: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 4 13:45:01.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:45:03.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:45:05.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:45:07.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716420699, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:45:09.516: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 4 13:45:09.532: INFO: Updating deployment test-recreate-deployment Feb 4 13:45:09.532: INFO: Watching deployment 
"test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 4 13:45:10.083: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7659,SelfLink:/apis/apps/v1/namespaces/deployment-7659/deployments/test-recreate-deployment,UID:59e063b9-1577-45e4-a72b-b0650a45e64b,ResourceVersion:23071116,Generation:2,CreationTimestamp:2020-02-04 13:44:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-04 13:45:09 +0000 UTC 2020-02-04 13:45:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-04 13:45:09 +0000 UTC 2020-02-04 13:44:59 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 4 13:45:10.092: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7659,SelfLink:/apis/apps/v1/namespaces/deployment-7659/replicasets/test-recreate-deployment-5c8c9cc69d,UID:fc36364b-8f73-473a-a229-06c4b4990779,ResourceVersion:23071113,Generation:1,CreationTimestamp:2020-02-04 13:45:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 59e063b9-1577-45e4-a72b-b0650a45e64b 0xc003006817 0xc003006818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 4 13:45:10.092: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 4 13:45:10.093: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7659,SelfLink:/apis/apps/v1/namespaces/deployment-7659/replicasets/test-recreate-deployment-6df85df6b9,UID:f06117cc-10b5-4c21-b5f9-5b4dd39a21eb,ResourceVersion:23071103,Generation:2,CreationTimestamp:2020-02-04 13:44:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 59e063b9-1577-45e4-a72b-b0650a45e64b 0xc0030068e7 0xc0030068e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 4 13:45:10.097: INFO: Pod "test-recreate-deployment-5c8c9cc69d-nz4jw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-nz4jw,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7659,SelfLink:/api/v1/namespaces/deployment-7659/pods/test-recreate-deployment-5c8c9cc69d-nz4jw,UID:93be2fd6-8aea-4380-b73a-0ccc1f51d68c,ResourceVersion:23071117,Generation:0,CreationTimestamp:2020-02-04 13:45:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d fc36364b-8f73-473a-a229-06c4b4990779 0xc0020bd787 0xc0020bd788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7r4wg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7r4wg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7r4wg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020bd800} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020bd820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:45:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:45:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:45:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 13:45:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-04 13:45:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:45:10.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7659" for this suite. 
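The Deployment object dumped above reduces to a short manifest. This is a reconstruction, not the test's source: every field value below (name, namespace, labels, image, replica count, Recreate strategy, zero grace period) is taken from the spec printed in the log; nothing else is added.

```yaml
# Reconstructed from the Deployment spec dumped in the log above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
  namespace: deployment-7659
  labels:
    name: sample-pod-3
spec:
  replicas: 1
  strategy:
    type: Recreate        # all old pods are terminated before any new pod starts
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

The `Recreate` strategy is what the test exercises: revision 1 (the old ReplicaSet running the redis image) is scaled to 0 before the revision-2 nginx ReplicaSet creates its pod, so old and new pods never run together.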
Feb 4 13:45:16.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:45:16.198: INFO: namespace deployment-7659 deletion completed in 6.096283087s • [SLOW TEST:16.871 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:45:16.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-160e0dea-69ff-4325-80bf-55c48d683880 STEP: Creating secret with name s-test-opt-upd-be646a00-a54c-412c-ad37-c71f53e12478 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-160e0dea-69ff-4325-80bf-55c48d683880 STEP: Updating secret s-test-opt-upd-be646a00-a54c-412c-ad37-c71f53e12478 STEP: Creating secret with name s-test-opt-create-dc6b14d9-f008-44d4-ba3d-40c67ca6f02d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:46:58.780: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8896" for this suite. Feb 4 13:47:20.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:47:20.969: INFO: namespace projected-8896 deletion completed in 22.181878976s • [SLOW TEST:124.771 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:47:20.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 4 13:47:21.097: INFO: Waiting up to 5m0s for pod "pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e" in namespace "emptydir-2736" to be "success or failure" Feb 4 13:47:21.110: INFO: Pod "pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.549336ms Feb 4 13:47:23.117: INFO: Pod "pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019943801s Feb 4 13:47:25.127: INFO: Pod "pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030224035s Feb 4 13:47:27.141: INFO: Pod "pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043903978s Feb 4 13:47:29.151: INFO: Pod "pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053869014s Feb 4 13:47:31.161: INFO: Pod "pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064531508s STEP: Saw pod success Feb 4 13:47:31.162: INFO: Pod "pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e" satisfied condition "success or failure" Feb 4 13:47:31.165: INFO: Trying to get logs from node iruya-node pod pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e container test-container: STEP: delete the pod Feb 4 13:47:31.484: INFO: Waiting for pod pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e to disappear Feb 4 13:47:31.571: INFO: Pod pod-5f2d872b-4b0c-4a24-b8c5-2f16e7982c5e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:47:31.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2736" for this suite. 
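The emptyDir pod the test above creates is roughly sketched below. Only the emptyDir default medium, the 0644 file mode, and the non-root user come from the test name "(non-root,0644,default)"; the image, user id, file path, and command are illustrative assumptions.

```yaml
# Illustrative sketch: image, uid, path, and command are assumptions;
# the emptyDir default medium, 0644 mode, and non-root user come from
# the test name "(non-root,0644,default)".
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000       # non-root
  containers:
  - name: test-container
    image: busybox        # assumed image
    command: ["sh", "-c",
      "echo hello > /mnt/volume/f && chmod 0644 /mnt/volume/f && ls -l /mnt/volume/f"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir: {}          # "default" medium: node-local disk, not tmpfs
```

As in the log, such a pod runs to completion ("success or failure"), and the test then reads the container's logs from the node to verify the file mode.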
Feb 4 13:47:37.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:47:37.833: INFO: namespace emptydir-2736 deletion completed in 6.243621965s • [SLOW TEST:16.863 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:47:37.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 4 13:47:37.984: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 4 13:47:38.000: INFO: Number of nodes with available pods: 0 Feb 4 13:47:38.000: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Feb 4 13:47:38.060: INFO: Number of nodes with available pods: 0 Feb 4 13:47:38.060: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:39.073: INFO: Number of nodes with available pods: 0 Feb 4 13:47:39.073: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:40.067: INFO: Number of nodes with available pods: 0 Feb 4 13:47:40.067: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:41.072: INFO: Number of nodes with available pods: 0 Feb 4 13:47:41.073: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:42.070: INFO: Number of nodes with available pods: 0 Feb 4 13:47:42.070: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:43.074: INFO: Number of nodes with available pods: 0 Feb 4 13:47:43.075: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:44.071: INFO: Number of nodes with available pods: 0 Feb 4 13:47:44.071: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:45.067: INFO: Number of nodes with available pods: 0 Feb 4 13:47:45.067: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:46.068: INFO: Number of nodes with available pods: 0 Feb 4 13:47:46.068: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:47.073: INFO: Number of nodes with available pods: 1 Feb 4 13:47:47.073: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 4 13:47:47.121: INFO: Number of nodes with available pods: 1 Feb 4 13:47:47.121: INFO: Number of running nodes: 0, number of available pods: 1 Feb 4 13:47:48.129: INFO: Number of nodes with available pods: 0 Feb 4 13:47:48.130: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 4 13:47:48.147: INFO: Number of nodes with available pods: 0 Feb 4 13:47:48.147: INFO: Node 
iruya-node is running more than one daemon pod Feb 4 13:47:49.157: INFO: Number of nodes with available pods: 0 Feb 4 13:47:49.157: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:50.157: INFO: Number of nodes with available pods: 0 Feb 4 13:47:50.157: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:51.160: INFO: Number of nodes with available pods: 0 Feb 4 13:47:51.160: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:52.158: INFO: Number of nodes with available pods: 0 Feb 4 13:47:52.158: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:53.157: INFO: Number of nodes with available pods: 0 Feb 4 13:47:53.157: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:54.159: INFO: Number of nodes with available pods: 0 Feb 4 13:47:54.159: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:55.158: INFO: Number of nodes with available pods: 0 Feb 4 13:47:55.159: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:56.159: INFO: Number of nodes with available pods: 0 Feb 4 13:47:56.159: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:57.161: INFO: Number of nodes with available pods: 0 Feb 4 13:47:57.161: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:58.222: INFO: Number of nodes with available pods: 0 Feb 4 13:47:58.222: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:47:59.159: INFO: Number of nodes with available pods: 0 Feb 4 13:47:59.159: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:48:00.157: INFO: Number of nodes with available pods: 0 Feb 4 13:48:00.157: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:48:01.161: INFO: Number of nodes with available pods: 0 Feb 4 13:48:01.162: INFO: Node iruya-node is running more than one daemon pod Feb 4 13:48:02.158: INFO: Number of nodes with available pods: 1 Feb 4 13:48:02.158: INFO: 
Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4668, will wait for the garbage collector to delete the pods Feb 4 13:48:02.234: INFO: Deleting DaemonSet.extensions daemon-set took: 15.38076ms Feb 4 13:48:02.535: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.751481ms Feb 4 13:48:09.144: INFO: Number of nodes with available pods: 0 Feb 4 13:48:09.144: INFO: Number of running nodes: 0, number of available pods: 0 Feb 4 13:48:09.148: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4668/daemonsets","resourceVersion":"23071499"},"items":null} Feb 4 13:48:09.152: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4668/pods","resourceVersion":"23071499"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:48:09.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4668" for this suite. 
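The blue/green sequence above can be sketched as a manifest. The name, namespace, node-selector mechanism, and the switch to a RollingUpdate strategy come from the log; the label key/value (`color: green`), the pod label, and the image are illustrative assumptions.

```yaml
# Sketch of the DaemonSet exercised above. "color: green" and the image
# are illustrative; name, namespace, nodeSelector, and RollingUpdate
# come from the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-4668
spec:
  selector:
    matchLabels:
      app: daemon-set      # pod label is illustrative
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green       # daemon pods schedule only onto matching nodes
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed image
```

Relabeling a node to match (e.g. `kubectl label node iruya-node color=green --overwrite`, with the hypothetical key above) makes the daemon pod launch there, and changing the label away unschedules it, which is exactly the launch/unschedule cycle the log records.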
Feb 4 13:48:15.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:48:15.327: INFO: namespace daemonsets-4668 deletion completed in 6.127319027s • [SLOW TEST:37.492 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:48:15.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-cj6g STEP: Creating a pod to test atomic-volume-subpath Feb 4 13:48:15.527: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cj6g" in namespace "subpath-2818" to be "success or failure" Feb 4 13:48:15.544: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Pending", Reason="", readiness=false. Elapsed: 17.433967ms Feb 4 13:48:17.555: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027946412s Feb 4 13:48:19.562: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034679929s Feb 4 13:48:21.568: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04075213s Feb 4 13:48:23.576: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048953559s Feb 4 13:48:25.584: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 10.057014541s Feb 4 13:48:27.597: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 12.069678641s Feb 4 13:48:29.605: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 14.0784021s Feb 4 13:48:31.617: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 16.089815525s Feb 4 13:48:33.630: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 18.103271691s Feb 4 13:48:35.643: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 20.115960801s Feb 4 13:48:37.652: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 22.124666461s Feb 4 13:48:39.667: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 24.139615909s Feb 4 13:48:41.677: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 26.150359315s Feb 4 13:48:43.685: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Running", Reason="", readiness=true. Elapsed: 28.15834589s Feb 4 13:48:45.692: INFO: Pod "pod-subpath-test-configmap-cj6g": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.165334055s STEP: Saw pod success Feb 4 13:48:45.692: INFO: Pod "pod-subpath-test-configmap-cj6g" satisfied condition "success or failure" Feb 4 13:48:45.696: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-cj6g container test-container-subpath-configmap-cj6g: STEP: delete the pod Feb 4 13:48:45.748: INFO: Waiting for pod pod-subpath-test-configmap-cj6g to disappear Feb 4 13:48:45.862: INFO: Pod pod-subpath-test-configmap-cj6g no longer exists STEP: Deleting pod pod-subpath-test-configmap-cj6g Feb 4 13:48:45.862: INFO: Deleting pod "pod-subpath-test-configmap-cj6g" in namespace "subpath-2818" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:48:45.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2818" for this suite. Feb 4 13:48:51.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:48:52.047: INFO: namespace subpath-2818 deletion completed in 6.173236313s • [SLOW TEST:36.720 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Feb 4 13:48:52.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-7a331d8f-9976-49e7-86bd-05360a6bab28 STEP: Creating a pod to test consume configMaps Feb 4 13:48:52.219: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b" in namespace "projected-8244" to be "success or failure" Feb 4 13:48:52.248: INFO: Pod "pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.061506ms Feb 4 13:48:54.258: INFO: Pod "pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039230822s Feb 4 13:48:56.277: INFO: Pod "pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057971774s Feb 4 13:48:58.337: INFO: Pod "pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118378017s Feb 4 13:49:00.412: INFO: Pod "pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.192902885s STEP: Saw pod success Feb 4 13:49:00.412: INFO: Pod "pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b" satisfied condition "success or failure" Feb 4 13:49:00.415: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b container projected-configmap-volume-test: STEP: delete the pod Feb 4 13:49:00.460: INFO: Waiting for pod pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b to disappear Feb 4 13:49:00.482: INFO: Pod pod-projected-configmaps-7be0449f-e926-4188-b3a6-57c6c25fab0b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:49:00.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8244" for this suite. Feb 4 13:49:06.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:49:06.746: INFO: namespace projected-8244 deletion completed in 6.257621759s • [SLOW TEST:14.699 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:49:06.748: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:49:14.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1990" for this suite. Feb 4 13:49:56.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:49:57.114: INFO: namespace kubelet-test-1990 deletion completed in 42.199730825s • [SLOW TEST:50.366 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:49:57.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be 
provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Feb 4 13:49:57.225: INFO: Waiting up to 5m0s for pod "client-containers-7cf3e511-12cc-4c01-8400-5e4900812169" in namespace "containers-7175" to be "success or failure" Feb 4 13:49:57.232: INFO: Pod "client-containers-7cf3e511-12cc-4c01-8400-5e4900812169": Phase="Pending", Reason="", readiness=false. Elapsed: 7.165951ms Feb 4 13:49:59.242: INFO: Pod "client-containers-7cf3e511-12cc-4c01-8400-5e4900812169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017029581s Feb 4 13:50:01.251: INFO: Pod "client-containers-7cf3e511-12cc-4c01-8400-5e4900812169": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026199367s Feb 4 13:50:03.280: INFO: Pod "client-containers-7cf3e511-12cc-4c01-8400-5e4900812169": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055219755s Feb 4 13:50:05.288: INFO: Pod "client-containers-7cf3e511-12cc-4c01-8400-5e4900812169": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.063386149s STEP: Saw pod success Feb 4 13:50:05.288: INFO: Pod "client-containers-7cf3e511-12cc-4c01-8400-5e4900812169" satisfied condition "success or failure" Feb 4 13:50:05.293: INFO: Trying to get logs from node iruya-node pod client-containers-7cf3e511-12cc-4c01-8400-5e4900812169 container test-container: STEP: delete the pod Feb 4 13:50:05.369: INFO: Waiting for pod client-containers-7cf3e511-12cc-4c01-8400-5e4900812169 to disappear Feb 4 13:50:05.377: INFO: Pod client-containers-7cf3e511-12cc-4c01-8400-5e4900812169 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:50:05.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7175" for this suite. Feb 4 13:50:11.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:50:11.580: INFO: namespace containers-7175 deletion completed in 6.197818211s • [SLOW TEST:14.466 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:50:11.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Feb 4 13:50:11.720: INFO: Waiting up to 5m0s for pod "var-expansion-816c4387-231f-4eea-9288-aeb5140254b4" in namespace "var-expansion-6210" to be "success or failure" Feb 4 13:50:11.729: INFO: Pod "var-expansion-816c4387-231f-4eea-9288-aeb5140254b4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.674454ms Feb 4 13:50:13.741: INFO: Pod "var-expansion-816c4387-231f-4eea-9288-aeb5140254b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021364273s Feb 4 13:50:15.752: INFO: Pod "var-expansion-816c4387-231f-4eea-9288-aeb5140254b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032506621s Feb 4 13:50:17.763: INFO: Pod "var-expansion-816c4387-231f-4eea-9288-aeb5140254b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043334806s Feb 4 13:50:19.777: INFO: Pod "var-expansion-816c4387-231f-4eea-9288-aeb5140254b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.057273137s STEP: Saw pod success Feb 4 13:50:19.777: INFO: Pod "var-expansion-816c4387-231f-4eea-9288-aeb5140254b4" satisfied condition "success or failure" Feb 4 13:50:19.791: INFO: Trying to get logs from node iruya-node pod var-expansion-816c4387-231f-4eea-9288-aeb5140254b4 container dapi-container: STEP: delete the pod Feb 4 13:50:19.840: INFO: Waiting for pod var-expansion-816c4387-231f-4eea-9288-aeb5140254b4 to disappear Feb 4 13:50:19.893: INFO: Pod var-expansion-816c4387-231f-4eea-9288-aeb5140254b4 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:50:19.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6210" for this suite. Feb 4 13:50:25.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:50:26.050: INFO: namespace var-expansion-6210 deletion completed in 6.151040994s • [SLOW TEST:14.468 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:50:26.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-lpwh STEP: Creating a pod to test atomic-volume-subpath Feb 4 13:50:26.175: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lpwh" in namespace "subpath-7151" to be "success or failure" Feb 4 13:50:26.178: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.837364ms Feb 4 13:50:28.189: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013227456s Feb 4 13:50:30.202: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026646786s Feb 4 13:50:32.216: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041052564s Feb 4 13:50:34.236: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060986373s Feb 4 13:50:36.245: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. Elapsed: 10.069720531s Feb 4 13:50:38.256: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. Elapsed: 12.080775621s Feb 4 13:50:40.265: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. Elapsed: 14.089652464s Feb 4 13:50:42.271: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. Elapsed: 16.095694476s Feb 4 13:50:44.294: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.118848522s Feb 4 13:50:46.301: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. Elapsed: 20.126009663s Feb 4 13:50:48.317: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. Elapsed: 22.141300548s Feb 4 13:50:50.333: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. Elapsed: 24.157522209s Feb 4 13:50:52.346: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. Elapsed: 26.170490113s Feb 4 13:50:54.352: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Running", Reason="", readiness=true. Elapsed: 28.1765987s Feb 4 13:50:56.363: INFO: Pod "pod-subpath-test-downwardapi-lpwh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.187768933s STEP: Saw pod success Feb 4 13:50:56.363: INFO: Pod "pod-subpath-test-downwardapi-lpwh" satisfied condition "success or failure" Feb 4 13:50:56.368: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-lpwh container test-container-subpath-downwardapi-lpwh: STEP: delete the pod Feb 4 13:50:56.421: INFO: Waiting for pod pod-subpath-test-downwardapi-lpwh to disappear Feb 4 13:50:56.429: INFO: Pod pod-subpath-test-downwardapi-lpwh no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-lpwh Feb 4 13:50:56.430: INFO: Deleting pod "pod-subpath-test-downwardapi-lpwh" in namespace "subpath-7151" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:50:56.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7151" for this suite. 
Feb 4 13:51:02.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:51:02.690: INFO: namespace subpath-7151 deletion completed in 6.240185007s • [SLOW TEST:36.640 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:51:02.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 4 13:51:11.442: INFO: Successfully updated pod "labelsupdate34963b20-b36a-49ff-961f-9f289b881729" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 4 13:51:13.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8350" 
for this suite. Feb 4 13:51:35.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 4 13:51:35.696: INFO: namespace projected-8350 deletion completed in 22.173282586s • [SLOW TEST:33.005 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 4 13:51:35.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-7897 I0204 13:51:35.832695 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7897, replica count: 1 I0204 13:51:36.884388 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:51:37.884933 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:51:38.885757 8 runners.go:180] svc-latency-rc 
Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:51:39.886604 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:51:40.887403 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:51:41.887893 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:51:42.888592 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:51:43.889265 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:51:44.889809 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 13:51:45.116: INFO: Created: latency-svc-n8zn5 Feb 4 13:51:45.117: INFO: Got endpoints: latency-svc-n8zn5 [126.714085ms] Feb 4 13:51:45.199: INFO: Created: latency-svc-2xlhj Feb 4 13:51:45.324: INFO: Got endpoints: latency-svc-2xlhj [205.760659ms] Feb 4 13:51:45.344: INFO: Created: latency-svc-cs5v8 Feb 4 13:51:45.355: INFO: Got endpoints: latency-svc-cs5v8 [237.255516ms] Feb 4 13:51:45.387: INFO: Created: latency-svc-rbqvl Feb 4 13:51:45.404: INFO: Got endpoints: latency-svc-rbqvl [286.512537ms] Feb 4 13:51:45.548: INFO: Created: latency-svc-x5xk2 Feb 4 13:51:45.572: INFO: Got endpoints: latency-svc-x5xk2 [453.567674ms] Feb 4 13:51:45.606: INFO: Created: latency-svc-t74g7 Feb 4 13:51:45.615: INFO: Got endpoints: latency-svc-t74g7 [496.11345ms] Feb 4 13:51:45.923: INFO: Created: latency-svc-srrgk Feb 4 13:51:45.935: INFO: Got 
endpoints: latency-svc-srrgk [817.937682ms] Feb 4 13:51:46.082: INFO: Created: latency-svc-72qgk Feb 4 13:51:46.084: INFO: Got endpoints: latency-svc-72qgk [965.535542ms] Feb 4 13:51:46.248: INFO: Created: latency-svc-dgbxq Feb 4 13:51:46.262: INFO: Got endpoints: latency-svc-dgbxq [1.143225504s] Feb 4 13:51:46.323: INFO: Created: latency-svc-fx2fs Feb 4 13:51:46.462: INFO: Got endpoints: latency-svc-fx2fs [1.343766023s] Feb 4 13:51:46.477: INFO: Created: latency-svc-klkbm Feb 4 13:51:46.516: INFO: Got endpoints: latency-svc-klkbm [1.39718261s] Feb 4 13:51:46.563: INFO: Created: latency-svc-8cfsq Feb 4 13:51:46.724: INFO: Got endpoints: latency-svc-8cfsq [1.605194258s] Feb 4 13:51:46.727: INFO: Created: latency-svc-97jd6 Feb 4 13:51:46.749: INFO: Got endpoints: latency-svc-97jd6 [1.63107372s] Feb 4 13:51:46.793: INFO: Created: latency-svc-gv6zh Feb 4 13:51:46.891: INFO: Got endpoints: latency-svc-gv6zh [1.773123007s] Feb 4 13:51:46.937: INFO: Created: latency-svc-wxz7t Feb 4 13:51:46.951: INFO: Got endpoints: latency-svc-wxz7t [1.832894922s] Feb 4 13:51:47.109: INFO: Created: latency-svc-xbsqw Feb 4 13:51:47.109: INFO: Got endpoints: latency-svc-xbsqw [1.990510364s] Feb 4 13:51:47.160: INFO: Created: latency-svc-fhhcb Feb 4 13:51:47.168: INFO: Got endpoints: latency-svc-fhhcb [1.843317259s] Feb 4 13:51:47.285: INFO: Created: latency-svc-rnpm4 Feb 4 13:51:47.291: INFO: Got endpoints: latency-svc-rnpm4 [1.935905504s] Feb 4 13:51:47.339: INFO: Created: latency-svc-rhwmq Feb 4 13:51:47.348: INFO: Got endpoints: latency-svc-rhwmq [1.943747208s] Feb 4 13:51:47.392: INFO: Created: latency-svc-ldw79 Feb 4 13:51:47.460: INFO: Got endpoints: latency-svc-ldw79 [1.887517537s] Feb 4 13:51:47.510: INFO: Created: latency-svc-hspgm Feb 4 13:51:47.520: INFO: Got endpoints: latency-svc-hspgm [1.904989437s] Feb 4 13:51:47.542: INFO: Created: latency-svc-5zb5m Feb 4 13:51:47.659: INFO: Got endpoints: latency-svc-5zb5m [1.723321919s] Feb 4 13:51:47.703: INFO: Created: latency-svc-jmm6g 
Feb 4 13:51:47.713: INFO: Got endpoints: latency-svc-jmm6g [1.629702886s] Feb 4 13:51:47.859: INFO: Created: latency-svc-hhbc4 Feb 4 13:51:47.913: INFO: Got endpoints: latency-svc-hhbc4 [1.650130589s] Feb 4 13:51:47.922: INFO: Created: latency-svc-qz5dr Feb 4 13:51:47.933: INFO: Got endpoints: latency-svc-qz5dr [1.469904066s] Feb 4 13:51:48.069: INFO: Created: latency-svc-gmf7f Feb 4 13:51:48.081: INFO: Got endpoints: latency-svc-gmf7f [1.564386034s] Feb 4 13:51:48.126: INFO: Created: latency-svc-nxgtj Feb 4 13:51:48.217: INFO: Got endpoints: latency-svc-nxgtj [1.492690501s] Feb 4 13:51:48.222: INFO: Created: latency-svc-flfrz Feb 4 13:51:48.231: INFO: Got endpoints: latency-svc-flfrz [1.48179589s] Feb 4 13:51:48.273: INFO: Created: latency-svc-dqnhl Feb 4 13:51:48.294: INFO: Got endpoints: latency-svc-dqnhl [1.402803188s] Feb 4 13:51:48.423: INFO: Created: latency-svc-6twmn Feb 4 13:51:48.441: INFO: Got endpoints: latency-svc-6twmn [1.48941052s] Feb 4 13:51:48.490: INFO: Created: latency-svc-c7jcs Feb 4 13:51:48.491: INFO: Got endpoints: latency-svc-c7jcs [1.382090557s] Feb 4 13:51:48.626: INFO: Created: latency-svc-kc8wz Feb 4 13:51:48.672: INFO: Got endpoints: latency-svc-kc8wz [1.503988795s] Feb 4 13:51:48.676: INFO: Created: latency-svc-nlwp8 Feb 4 13:51:48.695: INFO: Got endpoints: latency-svc-nlwp8 [1.40348092s] Feb 4 13:51:48.794: INFO: Created: latency-svc-x8jz6 Feb 4 13:51:48.835: INFO: Got endpoints: latency-svc-x8jz6 [1.487122406s] Feb 4 13:51:48.873: INFO: Created: latency-svc-82vsq Feb 4 13:51:48.881: INFO: Got endpoints: latency-svc-82vsq [1.420151187s] Feb 4 13:51:48.979: INFO: Created: latency-svc-q8bwg Feb 4 13:51:49.002: INFO: Got endpoints: latency-svc-q8bwg [1.482258721s] Feb 4 13:51:49.163: INFO: Created: latency-svc-zvc5s Feb 4 13:51:49.171: INFO: Got endpoints: latency-svc-zvc5s [1.511650599s] Feb 4 13:51:49.202: INFO: Created: latency-svc-5w2qx Feb 4 13:51:49.207: INFO: Got endpoints: latency-svc-5w2qx [1.493263446s] Feb 4 13:51:49.241: 
INFO: Created: latency-svc-w56ct Feb 4 13:51:49.247: INFO: Got endpoints: latency-svc-w56ct [1.333266543s] Feb 4 13:51:49.329: INFO: Created: latency-svc-vhs29 Feb 4 13:51:49.365: INFO: Got endpoints: latency-svc-vhs29 [1.432341845s] Feb 4 13:51:49.381: INFO: Created: latency-svc-t26tf Feb 4 13:51:49.383: INFO: Got endpoints: latency-svc-t26tf [1.301416758s] Feb 4 13:51:49.432: INFO: Created: latency-svc-pfh4z Feb 4 13:51:49.530: INFO: Got endpoints: latency-svc-pfh4z [1.312444232s] Feb 4 13:51:49.568: INFO: Created: latency-svc-4rbhv Feb 4 13:51:49.573: INFO: Got endpoints: latency-svc-4rbhv [1.342216359s] Feb 4 13:51:49.619: INFO: Created: latency-svc-sds7m Feb 4 13:51:49.681: INFO: Got endpoints: latency-svc-sds7m [1.386678645s] Feb 4 13:51:49.728: INFO: Created: latency-svc-bcbbv Feb 4 13:51:49.729: INFO: Got endpoints: latency-svc-bcbbv [1.287183899s] Feb 4 13:51:49.787: INFO: Created: latency-svc-gqkmz Feb 4 13:51:49.858: INFO: Got endpoints: latency-svc-gqkmz [1.366225431s] Feb 4 13:51:49.873: INFO: Created: latency-svc-8cdj5 Feb 4 13:51:49.890: INFO: Got endpoints: latency-svc-8cdj5 [1.217552887s] Feb 4 13:51:49.927: INFO: Created: latency-svc-jknxd Feb 4 13:51:49.928: INFO: Got endpoints: latency-svc-jknxd [1.232759495s] Feb 4 13:51:50.017: INFO: Created: latency-svc-44vf2 Feb 4 13:51:50.030: INFO: Got endpoints: latency-svc-44vf2 [139.080978ms] Feb 4 13:51:50.074: INFO: Created: latency-svc-pv6hp Feb 4 13:51:50.077: INFO: Got endpoints: latency-svc-pv6hp [1.241656098s] Feb 4 13:51:50.108: INFO: Created: latency-svc-45vbg Feb 4 13:51:50.110: INFO: Got endpoints: latency-svc-45vbg [1.229502783s] Feb 4 13:51:50.197: INFO: Created: latency-svc-rdlbh Feb 4 13:51:50.203: INFO: Got endpoints: latency-svc-rdlbh [1.200515287s] Feb 4 13:51:50.260: INFO: Created: latency-svc-bglm5 Feb 4 13:51:50.275: INFO: Got endpoints: latency-svc-bglm5 [1.103483776s] Feb 4 13:51:50.349: INFO: Created: latency-svc-zkkwx Feb 4 13:51:50.369: INFO: Got endpoints: latency-svc-zkkwx 
[1.161268109s] Feb 4 13:51:50.397: INFO: Created: latency-svc-xhhjj Feb 4 13:51:50.423: INFO: Got endpoints: latency-svc-xhhjj [1.175586007s] Feb 4 13:51:50.455: INFO: Created: latency-svc-zkkvv Feb 4 13:51:50.544: INFO: Got endpoints: latency-svc-zkkvv [1.178158112s] Feb 4 13:51:50.572: INFO: Created: latency-svc-2jcfv Feb 4 13:51:50.578: INFO: Got endpoints: latency-svc-2jcfv [1.194991179s] Feb 4 13:51:50.760: INFO: Created: latency-svc-w8ldx Feb 4 13:51:50.773: INFO: Got endpoints: latency-svc-w8ldx [1.242404809s] Feb 4 13:51:50.815: INFO: Created: latency-svc-zdvn2 Feb 4 13:51:50.825: INFO: Got endpoints: latency-svc-zdvn2 [1.251886656s] Feb 4 13:51:50.925: INFO: Created: latency-svc-5g7lj Feb 4 13:51:50.937: INFO: Got endpoints: latency-svc-5g7lj [1.255482222s] Feb 4 13:51:50.981: INFO: Created: latency-svc-dbkth Feb 4 13:51:50.995: INFO: Got endpoints: latency-svc-dbkth [1.266718842s] Feb 4 13:51:51.160: INFO: Created: latency-svc-64c4n Feb 4 13:51:51.172: INFO: Got endpoints: latency-svc-64c4n [1.314205309s] Feb 4 13:51:51.245: INFO: Created: latency-svc-2zsn7 Feb 4 13:51:51.245: INFO: Got endpoints: latency-svc-2zsn7 [1.316833116s] Feb 4 13:51:51.318: INFO: Created: latency-svc-wqjzz Feb 4 13:51:51.339: INFO: Got endpoints: latency-svc-wqjzz [1.309614036s] Feb 4 13:51:51.408: INFO: Created: latency-svc-vt88m Feb 4 13:51:51.527: INFO: Got endpoints: latency-svc-vt88m [1.45005762s] Feb 4 13:51:51.578: INFO: Created: latency-svc-shpmd Feb 4 13:51:51.578: INFO: Got endpoints: latency-svc-shpmd [1.467769185s] Feb 4 13:51:51.629: INFO: Created: latency-svc-v5cgl Feb 4 13:51:51.729: INFO: Got endpoints: latency-svc-v5cgl [1.525228754s] Feb 4 13:51:51.787: INFO: Created: latency-svc-lkr6r Feb 4 13:51:51.817: INFO: Got endpoints: latency-svc-lkr6r [1.542687334s] Feb 4 13:51:51.824: INFO: Created: latency-svc-22gvs Feb 4 13:51:51.897: INFO: Got endpoints: latency-svc-22gvs [1.528581781s] Feb 4 13:51:51.929: INFO: Created: latency-svc-m6mlz Feb 4 13:51:51.950: INFO: 
Got endpoints: latency-svc-m6mlz [1.527153594s] Feb 4 13:51:51.974: INFO: Created: latency-svc-xvnpm Feb 4 13:51:51.994: INFO: Got endpoints: latency-svc-xvnpm [1.449326056s] Feb 4 13:51:52.105: INFO: Created: latency-svc-bvt5m Feb 4 13:51:52.119: INFO: Got endpoints: latency-svc-bvt5m [1.541495878s] Feb 4 13:51:52.150: INFO: Created: latency-svc-t594j Feb 4 13:51:52.158: INFO: Got endpoints: latency-svc-t594j [1.384533627s] Feb 4 13:51:52.268: INFO: Created: latency-svc-52ccq Feb 4 13:51:52.275: INFO: Got endpoints: latency-svc-52ccq [1.449327635s] Feb 4 13:51:52.377: INFO: Created: latency-svc-q2t2w Feb 4 13:51:52.434: INFO: Got endpoints: latency-svc-q2t2w [1.496689834s] Feb 4 13:51:52.464: INFO: Created: latency-svc-gwxrt Feb 4 13:51:52.493: INFO: Got endpoints: latency-svc-gwxrt [1.496853831s] Feb 4 13:51:52.531: INFO: Created: latency-svc-wxcck Feb 4 13:51:52.618: INFO: Got endpoints: latency-svc-wxcck [1.445192621s] Feb 4 13:51:52.713: INFO: Created: latency-svc-s67ck Feb 4 13:51:52.875: INFO: Got endpoints: latency-svc-s67ck [1.630507124s] Feb 4 13:51:52.921: INFO: Created: latency-svc-d76sw Feb 4 13:51:52.940: INFO: Got endpoints: latency-svc-d76sw [1.600721097s] Feb 4 13:51:53.078: INFO: Created: latency-svc-d98gp Feb 4 13:51:53.084: INFO: Got endpoints: latency-svc-d98gp [1.556999268s] Feb 4 13:51:53.335: INFO: Created: latency-svc-2cstc Feb 4 13:51:53.351: INFO: Got endpoints: latency-svc-2cstc [1.77219553s] Feb 4 13:51:53.541: INFO: Created: latency-svc-wx5ch Feb 4 13:51:53.569: INFO: Got endpoints: latency-svc-wx5ch [1.840016915s] Feb 4 13:51:53.629: INFO: Created: latency-svc-5v7p7 Feb 4 13:51:53.752: INFO: Got endpoints: latency-svc-5v7p7 [1.934048775s] Feb 4 13:51:53.807: INFO: Created: latency-svc-p4dzv Feb 4 13:51:53.832: INFO: Got endpoints: latency-svc-p4dzv [1.933942516s] Feb 4 13:51:53.968: INFO: Created: latency-svc-66r9w Feb 4 13:51:53.985: INFO: Got endpoints: latency-svc-66r9w [2.035167594s] Feb 4 13:51:54.141: INFO: Created: 
latency-svc-5mkk9 Feb 4 13:51:54.156: INFO: Got endpoints: latency-svc-5mkk9 [2.162140712s] Feb 4 13:51:54.202: INFO: Created: latency-svc-gvstg Feb 4 13:51:54.331: INFO: Got endpoints: latency-svc-gvstg [2.211940576s] Feb 4 13:51:54.332: INFO: Created: latency-svc-k8q68 Feb 4 13:51:54.361: INFO: Got endpoints: latency-svc-k8q68 [2.203795299s] Feb 4 13:51:54.400: INFO: Created: latency-svc-lgfk8 Feb 4 13:51:54.412: INFO: Got endpoints: latency-svc-lgfk8 [2.137199056s] Feb 4 13:51:54.579: INFO: Created: latency-svc-cpk9w Feb 4 13:51:54.599: INFO: Got endpoints: latency-svc-cpk9w [2.164491391s] Feb 4 13:51:54.673: INFO: Created: latency-svc-zg2pn Feb 4 13:51:54.766: INFO: Got endpoints: latency-svc-zg2pn [2.273062071s] Feb 4 13:51:54.782: INFO: Created: latency-svc-qnwts Feb 4 13:51:54.790: INFO: Got endpoints: latency-svc-qnwts [2.171624599s] Feb 4 13:51:54.838: INFO: Created: latency-svc-724xb Feb 4 13:51:54.858: INFO: Got endpoints: latency-svc-724xb [1.982593409s] Feb 4 13:51:55.013: INFO: Created: latency-svc-q7rvd Feb 4 13:51:55.018: INFO: Got endpoints: latency-svc-q7rvd [2.077135578s] Feb 4 13:51:55.075: INFO: Created: latency-svc-zx66k Feb 4 13:51:55.077: INFO: Got endpoints: latency-svc-zx66k [1.992750956s] Feb 4 13:51:55.213: INFO: Created: latency-svc-4b8wl Feb 4 13:51:55.229: INFO: Got endpoints: latency-svc-4b8wl [1.878267642s] Feb 4 13:51:55.279: INFO: Created: latency-svc-wbp6l Feb 4 13:51:55.291: INFO: Got endpoints: latency-svc-wbp6l [1.721932286s] Feb 4 13:51:55.422: INFO: Created: latency-svc-8gcfm Feb 4 13:51:55.433: INFO: Got endpoints: latency-svc-8gcfm [1.681054276s] Feb 4 13:51:55.491: INFO: Created: latency-svc-t7gzl Feb 4 13:51:55.491: INFO: Got endpoints: latency-svc-t7gzl [1.659470349s] Feb 4 13:51:55.654: INFO: Created: latency-svc-xmljg Feb 4 13:51:55.663: INFO: Got endpoints: latency-svc-xmljg [1.677404331s] Feb 4 13:51:55.729: INFO: Created: latency-svc-ccxvl Feb 4 13:51:55.731: INFO: Got endpoints: latency-svc-ccxvl [1.574945812s] 
Feb  4 13:51:55.860: INFO: Created: latency-svc-kzjpm
Feb  4 13:51:55.879: INFO: Got endpoints: latency-svc-kzjpm [1.547841747s]
Feb  4 13:51:55.928: INFO: Created: latency-svc-2mbwc
Feb  4 13:51:56.027: INFO: Got endpoints: latency-svc-2mbwc [1.664842312s]
Feb  4 13:51:56.076: INFO: Created: latency-svc-454hj
Feb  4 13:51:56.104: INFO: Got endpoints: latency-svc-454hj [1.691938963s]
Feb  4 13:51:56.249: INFO: Created: latency-svc-mkdr5
Feb  4 13:51:56.249: INFO: Got endpoints: latency-svc-mkdr5 [1.64972764s]
Feb  4 13:51:56.296: INFO: Created: latency-svc-rgntc
Feb  4 13:51:56.306: INFO: Got endpoints: latency-svc-rgntc [1.53965986s]
Feb  4 13:51:56.426: INFO: Created: latency-svc-s7jfm
Feb  4 13:51:56.434: INFO: Got endpoints: latency-svc-s7jfm [1.644622175s]
Feb  4 13:51:56.482: INFO: Created: latency-svc-nb9cd
Feb  4 13:51:56.485: INFO: Got endpoints: latency-svc-nb9cd [1.626764405s]
Feb  4 13:51:56.683: INFO: Created: latency-svc-fv95n
Feb  4 13:51:56.690: INFO: Got endpoints: latency-svc-fv95n [1.672692108s]
Feb  4 13:51:56.752: INFO: Created: latency-svc-62pdp
Feb  4 13:51:56.756: INFO: Got endpoints: latency-svc-62pdp [1.678492404s]
Feb  4 13:51:56.930: INFO: Created: latency-svc-98mrv
Feb  4 13:51:56.981: INFO: Got endpoints: latency-svc-98mrv [1.751821205s]
Feb  4 13:51:56.987: INFO: Created: latency-svc-kjg4p
Feb  4 13:51:56.994: INFO: Got endpoints: latency-svc-kjg4p [1.702021314s]
Feb  4 13:51:57.133: INFO: Created: latency-svc-g6v78
Feb  4 13:51:57.148: INFO: Got endpoints: latency-svc-g6v78 [1.714308354s]
Feb  4 13:51:57.198: INFO: Created: latency-svc-mv7r6
Feb  4 13:51:57.209: INFO: Got endpoints: latency-svc-mv7r6 [1.717499213s]
Feb  4 13:51:57.322: INFO: Created: latency-svc-sk7nj
Feb  4 13:51:57.338: INFO: Got endpoints: latency-svc-sk7nj [1.67539903s]
Feb  4 13:51:57.398: INFO: Created: latency-svc-4wrx5
Feb  4 13:51:57.560: INFO: Got endpoints: latency-svc-4wrx5 [1.828197902s]
Feb  4 13:51:57.604: INFO: Created: latency-svc-92bb4
Feb  4 13:51:57.619: INFO: Got endpoints: latency-svc-92bb4 [1.738874779s]
Feb  4 13:51:57.815: INFO: Created: latency-svc-n7crg
Feb  4 13:51:57.820: INFO: Got endpoints: latency-svc-n7crg [1.792672685s]
Feb  4 13:51:58.054: INFO: Created: latency-svc-xksl5
Feb  4 13:51:58.221: INFO: Got endpoints: latency-svc-xksl5 [2.116225547s]
Feb  4 13:51:58.258: INFO: Created: latency-svc-jctgr
Feb  4 13:51:58.264: INFO: Got endpoints: latency-svc-jctgr [2.014636962s]
Feb  4 13:51:58.324: INFO: Created: latency-svc-v248t
Feb  4 13:51:58.391: INFO: Got endpoints: latency-svc-v248t [2.084964497s]
Feb  4 13:51:58.446: INFO: Created: latency-svc-vvnmq
Feb  4 13:51:58.459: INFO: Got endpoints: latency-svc-vvnmq [2.024761362s]
Feb  4 13:51:58.620: INFO: Created: latency-svc-dgpqp
Feb  4 13:51:58.632: INFO: Got endpoints: latency-svc-dgpqp [2.146752207s]
Feb  4 13:51:58.719: INFO: Created: latency-svc-862r8
Feb  4 13:51:58.785: INFO: Got endpoints: latency-svc-862r8 [2.094038784s]
Feb  4 13:51:58.869: INFO: Created: latency-svc-q2vwc
Feb  4 13:51:59.015: INFO: Got endpoints: latency-svc-q2vwc [2.258945509s]
Feb  4 13:51:59.038: INFO: Created: latency-svc-nl9jh
Feb  4 13:51:59.054: INFO: Got endpoints: latency-svc-nl9jh [2.072682907s]
Feb  4 13:51:59.096: INFO: Created: latency-svc-5x27z
Feb  4 13:51:59.109: INFO: Got endpoints: latency-svc-5x27z [2.114909686s]
Feb  4 13:51:59.215: INFO: Created: latency-svc-zscj8
Feb  4 13:51:59.220: INFO: Got endpoints: latency-svc-zscj8 [2.072135968s]
Feb  4 13:51:59.262: INFO: Created: latency-svc-h676j
Feb  4 13:51:59.287: INFO: Got endpoints: latency-svc-h676j [2.077887775s]
Feb  4 13:51:59.402: INFO: Created: latency-svc-bb9vw
Feb  4 13:51:59.410: INFO: Got endpoints: latency-svc-bb9vw [2.070826291s]
Feb  4 13:51:59.459: INFO: Created: latency-svc-qm68c
Feb  4 13:51:59.469: INFO: Got endpoints: latency-svc-qm68c [1.908952506s]
Feb  4 13:51:59.599: INFO: Created: latency-svc-c6zdn
Feb  4 13:51:59.640: INFO: Got endpoints: latency-svc-c6zdn [2.021577647s]
Feb  4 13:51:59.833: INFO: Created: latency-svc-pfdgw
Feb  4 13:51:59.833: INFO: Got endpoints: latency-svc-pfdgw [2.013261848s]
Feb  4 13:52:00.013: INFO: Created: latency-svc-pmtkj
Feb  4 13:52:00.026: INFO: Got endpoints: latency-svc-pmtkj [1.804664437s]
Feb  4 13:52:00.167: INFO: Created: latency-svc-b5v67
Feb  4 13:52:00.173: INFO: Got endpoints: latency-svc-b5v67 [1.909366004s]
Feb  4 13:52:00.233: INFO: Created: latency-svc-gdgph
Feb  4 13:52:00.259: INFO: Got endpoints: latency-svc-gdgph [1.867425503s]
Feb  4 13:52:00.384: INFO: Created: latency-svc-zr2j7
Feb  4 13:52:00.397: INFO: Got endpoints: latency-svc-zr2j7 [1.937251299s]
Feb  4 13:52:00.465: INFO: Created: latency-svc-7j7fq
Feb  4 13:52:00.466: INFO: Got endpoints: latency-svc-7j7fq [1.833229762s]
Feb  4 13:52:00.573: INFO: Created: latency-svc-dpxkl
Feb  4 13:52:00.592: INFO: Got endpoints: latency-svc-dpxkl [1.806375787s]
Feb  4 13:52:00.649: INFO: Created: latency-svc-h4ltb
Feb  4 13:52:00.655: INFO: Got endpoints: latency-svc-h4ltb [1.640011532s]
Feb  4 13:52:00.736: INFO: Created: latency-svc-jm9dn
Feb  4 13:52:00.778: INFO: Got endpoints: latency-svc-jm9dn [1.72339712s]
Feb  4 13:52:00.791: INFO: Created: latency-svc-s95d2
Feb  4 13:52:00.861: INFO: Got endpoints: latency-svc-s95d2 [1.75205754s]
Feb  4 13:52:00.920: INFO: Created: latency-svc-xdnfc
Feb  4 13:52:00.920: INFO: Got endpoints: latency-svc-xdnfc [1.699554035s]
Feb  4 13:52:00.945: INFO: Created: latency-svc-k2twt
Feb  4 13:52:00.947: INFO: Got endpoints: latency-svc-k2twt [1.659454107s]
Feb  4 13:52:01.085: INFO: Created: latency-svc-jzzf7
Feb  4 13:52:01.097: INFO: Got endpoints: latency-svc-jzzf7 [1.686959141s]
Feb  4 13:52:01.134: INFO: Created: latency-svc-5pmcv
Feb  4 13:52:01.149: INFO: Got endpoints: latency-svc-5pmcv [1.679382523s]
Feb  4 13:52:01.216: INFO: Created: latency-svc-wk2cx
Feb  4 13:52:01.235: INFO: Got endpoints: latency-svc-wk2cx [1.594548569s]
Feb  4 13:52:01.274: INFO: Created: latency-svc-s2m7f
Feb  4 13:52:01.298: INFO: Got endpoints: latency-svc-s2m7f [1.46452569s]
Feb  4 13:52:01.435: INFO: Created: latency-svc-z4q24
Feb  4 13:52:01.444: INFO: Got endpoints: latency-svc-z4q24 [1.41842359s]
Feb  4 13:52:01.475: INFO: Created: latency-svc-lklht
Feb  4 13:52:01.493: INFO: Got endpoints: latency-svc-lklht [1.319203225s]
Feb  4 13:52:01.685: INFO: Created: latency-svc-wkmj7
Feb  4 13:52:01.694: INFO: Got endpoints: latency-svc-wkmj7 [1.434770415s]
Feb  4 13:52:01.756: INFO: Created: latency-svc-6zdjb
Feb  4 13:52:01.919: INFO: Got endpoints: latency-svc-6zdjb [1.522058505s]
Feb  4 13:52:01.948: INFO: Created: latency-svc-2k8rh
Feb  4 13:52:01.957: INFO: Got endpoints: latency-svc-2k8rh [1.490839023s]
Feb  4 13:52:02.001: INFO: Created: latency-svc-gw2cv
Feb  4 13:52:02.127: INFO: Got endpoints: latency-svc-gw2cv [1.534792321s]
Feb  4 13:52:02.135: INFO: Created: latency-svc-vgrtm
Feb  4 13:52:02.141: INFO: Got endpoints: latency-svc-vgrtm [1.485810757s]
Feb  4 13:52:02.207: INFO: Created: latency-svc-ldgrw
Feb  4 13:52:02.212: INFO: Got endpoints: latency-svc-ldgrw [1.432774528s]
Feb  4 13:52:02.327: INFO: Created: latency-svc-c6hxd
Feb  4 13:52:02.377: INFO: Got endpoints: latency-svc-c6hxd [1.515297825s]
Feb  4 13:52:02.393: INFO: Created: latency-svc-nn2g9
Feb  4 13:52:02.401: INFO: Got endpoints: latency-svc-nn2g9 [1.48083668s]
Feb  4 13:52:02.525: INFO: Created: latency-svc-2vqpk
Feb  4 13:52:02.536: INFO: Got endpoints: latency-svc-2vqpk [1.58927221s]
Feb  4 13:52:02.590: INFO: Created: latency-svc-x4hjv
Feb  4 13:52:02.791: INFO: Created: latency-svc-jpr8h
Feb  4 13:52:02.792: INFO: Got endpoints: latency-svc-x4hjv [1.693972111s]
Feb  4 13:52:02.798: INFO: Got endpoints: latency-svc-jpr8h [1.648627641s]
Feb  4 13:52:02.851: INFO: Created: latency-svc-p2ch4
Feb  4 13:52:02.851: INFO: Got endpoints: latency-svc-p2ch4 [1.615681096s]
Feb  4 13:52:02.927: INFO: Created: latency-svc-dbl5w
Feb  4 13:52:02.936: INFO: Got endpoints: latency-svc-dbl5w [1.638129809s]
Feb  4 13:52:02.995: INFO: Created: latency-svc-2zbs4
Feb  4 13:52:02.999: INFO: Got endpoints: latency-svc-2zbs4 [1.55458843s]
Feb  4 13:52:03.160: INFO: Created: latency-svc-9zznv
Feb  4 13:52:03.163: INFO: Got endpoints: latency-svc-9zznv [1.670302922s]
Feb  4 13:52:03.212: INFO: Created: latency-svc-8nddq
Feb  4 13:52:03.217: INFO: Got endpoints: latency-svc-8nddq [1.522238999s]
Feb  4 13:52:03.327: INFO: Created: latency-svc-dxxdb
Feb  4 13:52:03.339: INFO: Got endpoints: latency-svc-dxxdb [1.41964015s]
Feb  4 13:52:03.373: INFO: Created: latency-svc-mn8ck
Feb  4 13:52:03.383: INFO: Got endpoints: latency-svc-mn8ck [1.426149753s]
Feb  4 13:52:03.485: INFO: Created: latency-svc-5vl75
Feb  4 13:52:03.491: INFO: Got endpoints: latency-svc-5vl75 [1.363541473s]
Feb  4 13:52:03.537: INFO: Created: latency-svc-tkxfs
Feb  4 13:52:03.541: INFO: Got endpoints: latency-svc-tkxfs [1.39929056s]
Feb  4 13:52:03.657: INFO: Created: latency-svc-6mzc5
Feb  4 13:52:03.682: INFO: Got endpoints: latency-svc-6mzc5 [1.469819932s]
Feb  4 13:52:03.719: INFO: Created: latency-svc-jgnrb
Feb  4 13:52:03.728: INFO: Got endpoints: latency-svc-jgnrb [1.350593076s]
Feb  4 13:52:03.812: INFO: Created: latency-svc-dcs7p
Feb  4 13:52:03.878: INFO: Got endpoints: latency-svc-dcs7p [1.47713841s]
Feb  4 13:52:03.880: INFO: Created: latency-svc-hwcwf
Feb  4 13:52:03.898: INFO: Got endpoints: latency-svc-hwcwf [1.361415474s]
Feb  4 13:52:04.011: INFO: Created: latency-svc-qhqst
Feb  4 13:52:04.056: INFO: Got endpoints: latency-svc-qhqst [1.263657255s]
Feb  4 13:52:04.061: INFO: Created: latency-svc-w4r7k
Feb  4 13:52:04.073: INFO: Got endpoints: latency-svc-w4r7k [1.274715582s]
Feb  4 13:52:04.181: INFO: Created: latency-svc-77b9g
Feb  4 13:52:04.307: INFO: Got endpoints: latency-svc-77b9g [1.456219887s]
Feb  4 13:52:04.309: INFO: Created: latency-svc-9kq4s
Feb  4 13:52:04.325: INFO: Got endpoints: latency-svc-9kq4s [1.388223734s]
Feb  4 13:52:04.379: INFO: Created: latency-svc-svqgw
Feb  4 13:52:04.390: INFO: Got endpoints: latency-svc-svqgw [1.390496124s]
Feb  4 13:52:04.503: INFO: Created: latency-svc-cz6h7
Feb  4 13:52:04.524: INFO: Got endpoints: latency-svc-cz6h7 [1.361029059s]
Feb  4 13:52:04.527: INFO: Created: latency-svc-l65np
Feb  4 13:52:04.536: INFO: Got endpoints: latency-svc-l65np [1.319189187s]
Feb  4 13:52:04.598: INFO: Created: latency-svc-hk6w6
Feb  4 13:52:04.696: INFO: Got endpoints: latency-svc-hk6w6 [1.357139533s]
Feb  4 13:52:04.758: INFO: Created: latency-svc-wkxzh
Feb  4 13:52:04.760: INFO: Got endpoints: latency-svc-wkxzh [1.376939241s]
Feb  4 13:52:04.795: INFO: Created: latency-svc-xhz6t
Feb  4 13:52:04.865: INFO: Got endpoints: latency-svc-xhz6t [1.374034027s]
Feb  4 13:52:05.131: INFO: Created: latency-svc-b8kff
Feb  4 13:52:05.136: INFO: Got endpoints: latency-svc-b8kff [1.595152881s]
Feb  4 13:52:05.169: INFO: Created: latency-svc-lpgcw
Feb  4 13:52:05.181: INFO: Got endpoints: latency-svc-lpgcw [1.499165097s]
Feb  4 13:52:05.216: INFO: Created: latency-svc-954zc
Feb  4 13:52:05.290: INFO: Got endpoints: latency-svc-954zc [1.56158514s]
Feb  4 13:52:05.329: INFO: Created: latency-svc-7lnsp
Feb  4 13:52:05.336: INFO: Got endpoints: latency-svc-7lnsp [1.456760017s]
Feb  4 13:52:05.380: INFO: Created: latency-svc-9dt6r
Feb  4 13:52:05.523: INFO: Got endpoints: latency-svc-9dt6r [1.62484758s]
Feb  4 13:52:05.568: INFO: Created: latency-svc-897kg
Feb  4 13:52:05.568: INFO: Got endpoints: latency-svc-897kg [1.510866019s]
Feb  4 13:52:05.715: INFO: Created: latency-svc-2x8kr
Feb  4 13:52:05.724: INFO: Got endpoints: latency-svc-2x8kr [1.65136036s]
Feb  4 13:52:05.772: INFO: Created: latency-svc-t7lrt
Feb  4 13:52:05.794: INFO: Got endpoints: latency-svc-t7lrt [1.486306136s]
Feb  4 13:52:05.899: INFO: Created: latency-svc-mdlvx
Feb  4 13:52:05.923: INFO: Got endpoints: latency-svc-mdlvx [1.597971427s]
Feb  4 13:52:05.965: INFO: Created: latency-svc-xqvl7
Feb  4 13:52:05.972: INFO: Got endpoints: latency-svc-xqvl7 [1.58194306s]
Feb  4 13:52:06.076: INFO: Created: latency-svc-tqq28
Feb  4 13:52:06.080: INFO: Got endpoints: latency-svc-tqq28 [1.555261456s]
Feb  4 13:52:06.141: INFO: Created: latency-svc-zrgw8
Feb  4 13:52:06.159: INFO: Got endpoints: latency-svc-zrgw8 [1.622748267s]
Feb  4 13:52:06.267: INFO: Created: latency-svc-z8v7k
Feb  4 13:52:06.279: INFO: Got endpoints: latency-svc-z8v7k [1.582099573s]
Feb  4 13:52:06.333: INFO: Created: latency-svc-2n9zm
Feb  4 13:52:06.403: INFO: Got endpoints: latency-svc-2n9zm [1.642753927s]
Feb  4 13:52:06.426: INFO: Created: latency-svc-66bkj
Feb  4 13:52:06.429: INFO: Got endpoints: latency-svc-66bkj [1.5631059s]
Feb  4 13:52:06.469: INFO: Created: latency-svc-j88s2
Feb  4 13:52:06.657: INFO: Created: latency-svc-2sz4x
Feb  4 13:52:06.658: INFO: Got endpoints: latency-svc-j88s2 [1.521690076s]
Feb  4 13:52:06.707: INFO: Got endpoints: latency-svc-2sz4x [1.525932167s]
Feb  4 13:52:06.708: INFO: Latencies: [139.080978ms 205.760659ms 237.255516ms 286.512537ms 453.567674ms 496.11345ms 817.937682ms 965.535542ms 1.103483776s 1.143225504s 1.161268109s 1.175586007s 1.178158112s 1.194991179s 1.200515287s 1.217552887s 1.229502783s 1.232759495s 1.241656098s 1.242404809s 1.251886656s 1.255482222s 1.263657255s 1.266718842s 1.274715582s 1.287183899s 1.301416758s 1.309614036s 1.312444232s 1.314205309s 1.316833116s 1.319189187s 1.319203225s 1.333266543s 1.342216359s 1.343766023s 1.350593076s 1.357139533s 1.361029059s 1.361415474s 1.363541473s 1.366225431s 1.374034027s 1.376939241s 1.382090557s 1.384533627s 1.386678645s 1.388223734s 1.390496124s 1.39718261s 1.39929056s 1.402803188s 1.40348092s 1.41842359s 1.41964015s 1.420151187s 1.426149753s 1.432341845s 1.432774528s 1.434770415s 1.445192621s 1.449326056s 1.449327635s 1.45005762s 1.456219887s 1.456760017s 1.46452569s 1.467769185s 1.469819932s 1.469904066s 1.47713841s 1.48083668s 1.48179589s 1.482258721s 1.485810757s 1.486306136s 1.487122406s 1.48941052s 1.490839023s 1.492690501s 1.493263446s 1.496689834s 1.496853831s 1.499165097s 1.503988795s 1.510866019s 1.511650599s 1.515297825s 1.521690076s 1.522058505s 1.522238999s 1.525228754s 1.525932167s 1.527153594s 1.528581781s 1.534792321s 1.53965986s 1.541495878s 1.542687334s 1.547841747s 1.55458843s 1.555261456s 1.556999268s 1.56158514s 1.5631059s 1.564386034s 1.574945812s 1.58194306s 1.582099573s 1.58927221s 1.594548569s 1.595152881s 1.597971427s 1.600721097s 1.605194258s 1.615681096s 1.622748267s 1.62484758s 1.626764405s 1.629702886s 1.630507124s 1.63107372s 1.638129809s 1.640011532s 1.642753927s 1.644622175s 1.648627641s 1.64972764s 1.650130589s 1.65136036s 1.659454107s 1.659470349s 1.664842312s 1.670302922s 1.672692108s 1.67539903s 1.677404331s 1.678492404s 1.679382523s 1.681054276s 1.686959141s 1.691938963s 1.693972111s 1.699554035s 1.702021314s 1.714308354s 1.717499213s 1.721932286s 1.723321919s 1.72339712s 1.738874779s 1.751821205s 1.75205754s 1.77219553s 1.773123007s 1.792672685s 1.804664437s 1.806375787s 1.828197902s 1.832894922s 1.833229762s 1.840016915s 1.843317259s 1.867425503s 1.878267642s 1.887517537s 1.904989437s 1.908952506s 1.909366004s 1.933942516s 1.934048775s 1.935905504s 1.937251299s 1.943747208s 1.982593409s 1.990510364s 1.992750956s 2.013261848s 2.014636962s 2.021577647s 2.024761362s 2.035167594s 2.070826291s 2.072135968s 2.072682907s 2.077135578s 2.077887775s 2.084964497s 2.094038784s 2.114909686s 2.116225547s 2.137199056s 2.146752207s 2.162140712s 2.164491391s 2.171624599s 2.203795299s 2.211940576s 2.258945509s 2.273062071s]
Feb  4 13:52:06.709: INFO: 50 %ile: 1.55458843s
Feb  4 13:52:06.709: INFO: 90 %ile: 2.024761362s
Feb  4 13:52:06.709: INFO: 99 %ile: 2.258945509s
Feb  4 13:52:06.709: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:52:06.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7897" for this suite.
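The 50/90/99 %ile figures above are derived from the 200 sorted samples in the "Latencies:" line. A minimal Python sketch of that summary step follows; the indexing rule (index = n*p/100) is inferred from the printed values, not taken from the e2e framework's Go source:

```python
def percentile(sorted_samples, p):
    """p-th percentile via index = n * p // 100.

    This indexing reproduces the 50/90/99 %ile values printed in the
    log above (an assumption inferred from the output, not the actual
    e2e framework helper)."""
    n = len(sorted_samples)
    return sorted_samples[min(n * p // 100, n - 1)]

# Illustrative durations in seconds, sorted as in the "Latencies:" line.
samples = sorted([0.139, 0.205, 1.103, 1.554, 1.9, 2.024, 2.258, 2.273])

for p in (50, 90, 99):
    print(f"{p} %ile: {percentile(samples, p)}s")
```

With 200 samples, p=90 lands on index 180, which matches the 2.024761362s reported above.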
Feb  4 13:52:44.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:52:44.971: INFO: namespace svc-latency-7897 deletion completed in 38.207404879s

• [SLOW TEST:69.274 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:52:44.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  4 13:52:45.217: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6436,SelfLink:/api/v1/namespaces/watch-6436/configmaps/e2e-watch-test-resource-version,UID:15323a7e-fd35-4403-8e52-b1ee877921e6,ResourceVersion:23073435,Generation:0,CreationTimestamp:2020-02-04 13:52:45 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  4 13:52:45.217: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6436,SelfLink:/api/v1/namespaces/watch-6436/configmaps/e2e-watch-test-resource-version,UID:15323a7e-fd35-4403-8e52-b1ee877921e6,ResourceVersion:23073436,Generation:0,CreationTimestamp:2020-02-04 13:52:45 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:52:45.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6436" for this suite.
Feb  4 13:52:51.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:52:51.441: INFO: namespace watch-6436 deletion completed in 6.21522588s

• [SLOW TEST:6.467 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:52:51.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  4 13:53:00.095: INFO: Successfully updated pod "pod-update-activedeadlineseconds-78c833e9-613e-4539-87a6-044f0f1836a6"
Feb  4 13:53:00.095: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-78c833e9-613e-4539-87a6-044f0f1836a6" in namespace "pods-4292" to be "terminated due to deadline exceeded"
Feb  4 13:53:00.116: INFO: Pod "pod-update-activedeadlineseconds-78c833e9-613e-4539-87a6-044f0f1836a6": Phase="Running", Reason="", readiness=true. Elapsed: 21.104094ms
Feb  4 13:53:02.135: INFO: Pod "pod-update-activedeadlineseconds-78c833e9-613e-4539-87a6-044f0f1836a6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.040081742s
Feb  4 13:53:02.136: INFO: Pod "pod-update-activedeadlineseconds-78c833e9-613e-4539-87a6-044f0f1836a6" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:53:02.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4292" for this suite.
Feb  4 13:53:08.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:53:08.477: INFO: namespace pods-4292 deletion completed in 6.331961188s

• [SLOW TEST:17.036 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:53:08.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
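The pod above flips from Running to Failed with reason DeadlineExceeded once its runtime passes the updated activeDeadlineSeconds. A hedged sketch of that decision in Python (names are illustrative; the real check lives in the kubelet's Go code):

```python
from datetime import datetime, timedelta

def deadline_exceeded(start_time: datetime, active_deadline_seconds: int,
                      now: datetime) -> bool:
    """True once the pod has been running longer than its deadline."""
    return now - start_time > timedelta(seconds=active_deadline_seconds)

# Hypothetical start time; the test shortens the deadline to force failure.
start = datetime(2020, 2, 4, 13, 52, 55)
print(deadline_exceeded(start, 5, datetime(2020, 2, 4, 13, 52, 58)))  # still within deadline
print(deadline_exceeded(start, 5, datetime(2020, 2, 4, 13, 53, 2)))   # past deadline
```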
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  4 13:53:08.651: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2336,SelfLink:/api/v1/namespaces/watch-2336/configmaps/e2e-watch-test-watch-closed,UID:6fac5bd7-73bd-4b43-8ee8-c573c06572d0,ResourceVersion:23073508,Generation:0,CreationTimestamp:2020-02-04 13:53:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  4 13:53:08.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2336,SelfLink:/api/v1/namespaces/watch-2336/configmaps/e2e-watch-test-watch-closed,UID:6fac5bd7-73bd-4b43-8ee8-c573c06572d0,ResourceVersion:23073509,Generation:0,CreationTimestamp:2020-02-04 13:53:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  4 13:53:08.672: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2336,SelfLink:/api/v1/namespaces/watch-2336/configmaps/e2e-watch-test-watch-closed,UID:6fac5bd7-73bd-4b43-8ee8-c573c06572d0,ResourceVersion:23073510,Generation:0,CreationTimestamp:2020-02-04 13:53:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  4 13:53:08.673: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2336,SelfLink:/api/v1/namespaces/watch-2336/configmaps/e2e-watch-test-watch-closed,UID:6fac5bd7-73bd-4b43-8ee8-c573c06572d0,ResourceVersion:23073511,Generation:0,CreationTimestamp:2020-02-04 13:53:08 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:53:08.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2336" for this suite.
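The restart-watch behavior above relies on resourceVersion ordering: a new watch started from the last observed resourceVersion replays only the events after it, which is why exactly the later MODIFIED and DELETED notifications arrive. A toy in-memory illustration (not client-go; all names are made up for this sketch):

```python
def events_after(events, last_rv):
    """Replay events whose resourceVersion is greater than last_rv,
    mimicking `watch?resourceVersion=<last_rv>` semantics in miniature."""
    return [e for e in events if e["rv"] > last_rv]

# Event history mirroring the resourceVersions seen in the log above.
history = [
    {"rv": 23073508, "type": "ADDED"},
    {"rv": 23073509, "type": "MODIFIED"},
    {"rv": 23073510, "type": "MODIFIED"},
    {"rv": 23073511, "type": "DELETED"},
]

# The first watch closed after observing rv 23073509; restart from there.
replay = events_after(history, 23073509)
print([e["type"] for e in replay])
```

Note that in the real API a resourceVersion is an opaque string and this numeric comparison is only valid inside this toy model.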
Feb  4 13:53:14.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:53:14.834: INFO: namespace watch-2336 deletion completed in 6.155454242s

• [SLOW TEST:6.356 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:53:14.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 13:53:14.999: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 17.178165ms)
Feb  4 13:53:15.009: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.772323ms)
Feb  4 13:53:15.014: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.581016ms)
Feb  4 13:53:15.019: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.94781ms)
Feb  4 13:53:15.025: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.150457ms)
Feb  4 13:53:15.030: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.499093ms)
Feb  4 13:53:15.035: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.223322ms)
Feb  4 13:53:15.041: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.164262ms)
Feb  4 13:53:15.045: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.61591ms)
Feb  4 13:53:15.051: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.118238ms)
Feb  4 13:53:15.056: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.181827ms)
Feb  4 13:53:15.062: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.196274ms)
Feb  4 13:53:15.068: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.511172ms)
Feb  4 13:53:15.091: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.491774ms)
Feb  4 13:53:15.096: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.962531ms)
Feb  4 13:53:15.101: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.736211ms)
Feb  4 13:53:15.106: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.235592ms)
Feb  4 13:53:15.110: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.433265ms)
Feb  4 13:53:15.115: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.888961ms)
Feb  4 13:53:15.121: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.617396ms)
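Each of the 20 iterations above records an HTTP status and round-trip time in the form `(200; 5.617396ms)`. A small sketch of extracting and averaging those figures from such lines; the regex is an assumption about this log format, not an official parser:

```python
import re

# Matches trailing "(status; <float>ms)" fragments, e.g. "(200; 17.178165ms)".
PATTERN = re.compile(r"\((\d+); ([0-9.]+)ms\)")

def proxy_latencies(lines):
    """Yield (status, milliseconds) pairs from e2e proxy log lines."""
    for line in lines:
        m = PATTERN.search(line)
        if m:
            yield int(m.group(1)), float(m.group(2))

log = [
    "alternatives.l... (200; 17.178165ms)",
    "alternatives.l... (200; 9.772323ms)",
    "alternatives.l... (200; 5.581016ms)",
]
samples = list(proxy_latencies(log))
assert all(status == 200 for status, _ in samples)
print(f"avg: {sum(ms for _, ms in samples) / len(samples):.3f}ms")
```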
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:53:15.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3286" for this suite.
Feb  4 13:53:21.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:53:21.227: INFO: namespace proxy-3286 deletion completed in 6.102633451s

• [SLOW TEST:6.393 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:53:21.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-da327fb2-a157-45ec-bb9a-ff4309c9cc10
STEP: Creating a pod to test consume secrets
Feb  4 13:53:21.371: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01" in namespace "projected-419" to be "success or failure"
Feb  4 13:53:21.383: INFO: Pod "pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01": Phase="Pending", Reason="", readiness=false. Elapsed: 12.240627ms
Feb  4 13:53:23.391: INFO: Pod "pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019985493s
Feb  4 13:53:25.398: INFO: Pod "pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026914643s
Feb  4 13:53:27.409: INFO: Pod "pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038529761s
Feb  4 13:53:29.427: INFO: Pod "pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05580318s
Feb  4 13:53:31.439: INFO: Pod "pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068195532s
STEP: Saw pod success
Feb  4 13:53:31.439: INFO: Pod "pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01" satisfied condition "success or failure"
Feb  4 13:53:31.444: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01 container projected-secret-volume-test: 
STEP: delete the pod
Feb  4 13:53:31.538: INFO: Waiting for pod pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01 to disappear
Feb  4 13:53:31.546: INFO: Pod pod-projected-secrets-930bf6b1-f8a5-4706-9efc-511f98003e01 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:53:31.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-419" for this suite.
Feb  4 13:53:37.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:53:38.045: INFO: namespace projected-419 deletion completed in 6.490772132s

• [SLOW TEST:16.817 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:53:38.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 13:53:38.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0" in namespace "downward-api-4964" to be "success or failure"
Feb  4 13:53:38.229: INFO: Pod "downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0": Phase="Pending", Reason="", readiness=false. Elapsed: 111.196356ms
Feb  4 13:53:40.238: INFO: Pod "downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12021703s
Feb  4 13:53:42.245: INFO: Pod "downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127184712s
Feb  4 13:53:44.253: INFO: Pod "downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134674632s
Feb  4 13:53:46.263: INFO: Pod "downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145217081s
Feb  4 13:53:48.275: INFO: Pod "downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.157230725s
STEP: Saw pod success
Feb  4 13:53:48.275: INFO: Pod "downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0" satisfied condition "success or failure"
Feb  4 13:53:48.279: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0 container client-container: 
STEP: delete the pod
Feb  4 13:53:48.540: INFO: Waiting for pod downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0 to disappear
Feb  4 13:53:48.547: INFO: Pod downwardapi-volume-d334eeae-534b-4ee9-b877-a42d351b90e0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:53:48.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4964" for this suite.
Feb  4 13:53:54.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:53:54.724: INFO: namespace downward-api-4964 deletion completed in 6.168512219s

• [SLOW TEST:16.678 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:53:54.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 13:53:55.245: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50" in namespace "projected-3779" to be "success or failure"
Feb  4 13:53:55.260: INFO: Pod "downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50": Phase="Pending", Reason="", readiness=false. Elapsed: 14.221449ms
Feb  4 13:53:57.275: INFO: Pod "downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029881267s
Feb  4 13:53:59.280: INFO: Pod "downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035102345s
Feb  4 13:54:01.289: INFO: Pod "downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043271335s
Feb  4 13:54:03.300: INFO: Pod "downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054752193s
STEP: Saw pod success
Feb  4 13:54:03.300: INFO: Pod "downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50" satisfied condition "success or failure"
Feb  4 13:54:03.304: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50 container client-container: 
STEP: delete the pod
Feb  4 13:54:03.429: INFO: Waiting for pod downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50 to disappear
Feb  4 13:54:03.435: INFO: Pod downwardapi-volume-cd023f2c-ebad-49bf-9e36-1724f8844d50 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:54:03.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3779" for this suite.
Feb  4 13:54:09.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:54:09.583: INFO: namespace projected-3779 deletion completed in 6.142763315s

• [SLOW TEST:14.859 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:54:09.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb  4 13:54:09.735: INFO: Waiting up to 5m0s for pod "client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6" in namespace "containers-8117" to be "success or failure"
Feb  4 13:54:09.740: INFO: Pod "client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251028ms
Feb  4 13:54:11.751: INFO: Pod "client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015206569s
Feb  4 13:54:13.800: INFO: Pod "client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064154139s
Feb  4 13:54:15.811: INFO: Pod "client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075739457s
Feb  4 13:54:17.836: INFO: Pod "client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100610509s
Feb  4 13:54:19.862: INFO: Pod "client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.126918704s
STEP: Saw pod success
Feb  4 13:54:19.863: INFO: Pod "client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6" satisfied condition "success or failure"
Feb  4 13:54:19.879: INFO: Trying to get logs from node iruya-node pod client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6 container test-container: 
STEP: delete the pod
Feb  4 13:54:19.947: INFO: Waiting for pod client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6 to disappear
Feb  4 13:54:20.019: INFO: Pod client-containers-0029bc4b-012d-4bd2-9345-bdca645767b6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:54:20.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8117" for this suite.
Feb  4 13:54:26.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:54:26.153: INFO: namespace containers-8117 deletion completed in 6.128298198s

• [SLOW TEST:16.569 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:54:26.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  4 13:54:35.253: INFO: Successfully updated pod "annotationupdate5e79cf69-5b30-4fb0-b1fa-cfd9d5600fb7"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:54:39.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6510" for this suite.
Feb  4 13:55:01.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:55:01.687: INFO: namespace projected-6510 deletion completed in 22.198416516s

• [SLOW TEST:35.533 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:55:01.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  4 13:55:01.904: INFO: Waiting up to 5m0s for pod "pod-4e100be5-c66b-42f0-b980-c4c760a4191b" in namespace "emptydir-5543" to be "success or failure"
Feb  4 13:55:01.953: INFO: Pod "pod-4e100be5-c66b-42f0-b980-c4c760a4191b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.1076ms
Feb  4 13:55:03.972: INFO: Pod "pod-4e100be5-c66b-42f0-b980-c4c760a4191b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068092879s
Feb  4 13:55:05.987: INFO: Pod "pod-4e100be5-c66b-42f0-b980-c4c760a4191b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083611853s
Feb  4 13:55:08.002: INFO: Pod "pod-4e100be5-c66b-42f0-b980-c4c760a4191b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098726205s
Feb  4 13:55:10.098: INFO: Pod "pod-4e100be5-c66b-42f0-b980-c4c760a4191b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19391266s
Feb  4 13:55:12.103: INFO: Pod "pod-4e100be5-c66b-42f0-b980-c4c760a4191b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.199476924s
STEP: Saw pod success
Feb  4 13:55:12.103: INFO: Pod "pod-4e100be5-c66b-42f0-b980-c4c760a4191b" satisfied condition "success or failure"
Feb  4 13:55:12.106: INFO: Trying to get logs from node iruya-node pod pod-4e100be5-c66b-42f0-b980-c4c760a4191b container test-container: 
STEP: delete the pod
Feb  4 13:55:12.208: INFO: Waiting for pod pod-4e100be5-c66b-42f0-b980-c4c760a4191b to disappear
Feb  4 13:55:12.218: INFO: Pod pod-4e100be5-c66b-42f0-b980-c4c760a4191b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:55:12.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5543" for this suite.
Feb  4 13:55:18.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:55:18.588: INFO: namespace emptydir-5543 deletion completed in 6.362054931s

• [SLOW TEST:16.901 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:55:18.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-9c4cca03-d575-412f-9fe6-979be94b39a0
STEP: Creating secret with name s-test-opt-upd-e00e5124-94b9-497e-b4b2-5e6b0fd73b98
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9c4cca03-d575-412f-9fe6-979be94b39a0
STEP: Updating secret s-test-opt-upd-e00e5124-94b9-497e-b4b2-5e6b0fd73b98
STEP: Creating secret with name s-test-opt-create-5577a2fd-816c-4f5f-b963-91525c81899f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:55:35.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8361" for this suite.
Feb  4 13:56:07.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:56:07.200: INFO: namespace secrets-8361 deletion completed in 32.180703566s

• [SLOW TEST:48.611 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:56:07.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 13:56:07.252: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:56:08.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4091" for this suite.
Feb  4 13:56:14.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:56:14.603: INFO: namespace custom-resource-definition-4091 deletion completed in 6.134413934s

• [SLOW TEST:7.402 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:56:14.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-c07ec4d8-f0b6-47da-9363-599883884f5b
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:56:14.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4216" for this suite.
Feb  4 13:56:20.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:56:21.018: INFO: namespace secrets-4216 deletion completed in 6.268694922s

• [SLOW TEST:6.414 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:56:21.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-59f45544-3c77-46ec-8ce8-fecb7f42287f
STEP: Creating a pod to test consume configMaps
Feb  4 13:56:21.152: INFO: Waiting up to 5m0s for pod "pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed" in namespace "configmap-7031" to be "success or failure"
Feb  4 13:56:21.209: INFO: Pod "pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 57.039794ms
Feb  4 13:56:23.216: INFO: Pod "pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064106032s
Feb  4 13:56:25.223: INFO: Pod "pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071479321s
Feb  4 13:56:27.279: INFO: Pod "pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12700313s
Feb  4 13:56:29.330: INFO: Pod "pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178002943s
Feb  4 13:56:31.353: INFO: Pod "pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.200715431s
STEP: Saw pod success
Feb  4 13:56:31.353: INFO: Pod "pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed" satisfied condition "success or failure"
Feb  4 13:56:31.356: INFO: Trying to get logs from node iruya-node pod pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed container configmap-volume-test: 
STEP: delete the pod
Feb  4 13:56:31.421: INFO: Waiting for pod pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed to disappear
Feb  4 13:56:31.437: INFO: Pod pod-configmaps-14a8e925-5e92-4a08-8a7e-f584c8a7c9ed no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:56:31.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7031" for this suite.
Feb  4 13:56:37.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:56:37.656: INFO: namespace configmap-7031 deletion completed in 6.213930125s

• [SLOW TEST:16.638 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:56:37.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:56:47.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6409" for this suite.
Feb  4 13:57:39.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:57:40.078: INFO: namespace kubelet-test-6409 deletion completed in 52.174701621s

• [SLOW TEST:62.421 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:57:40.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:57:40.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4321" for this suite.
Feb  4 13:57:46.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:57:46.327: INFO: namespace services-4321 deletion completed in 6.140085478s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.249 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:57:46.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0204 13:58:29.017348       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  4 13:58:29.017: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:58:29.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1513" for this suite.
Feb  4 13:58:37.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:58:37.163: INFO: namespace gc-1513 deletion completed in 8.140848155s

• [SLOW TEST:50.836 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
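The orphaning behavior this test exercises is driven by the delete options sent with the ReplicationController deletion. A minimal sketch of such a request body, assuming a hypothetical namespace and RC name (the test's actual object names are generated):

```yaml
# Hypothetical request body, sent with:
#   DELETE /api/v1/namespaces/<namespace>/replicationcontrollers/<name>
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan   # delete the RC but leave its pods behind
```

With `Orphan`, the garbage collector removes the owner references from the dependent pods instead of deleting them, which is exactly what the 30-second wait above verifies.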
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:58:37.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0204 13:58:49.998897       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  4 13:58:49.998: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:58:49.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2481" for this suite.
Feb  4 13:58:56.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:58:56.161: INFO: namespace gc-2481 deletion completed in 6.159130502s

• [SLOW TEST:18.998 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
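This is the complement of the orphaning test: the delete options allow cascading, so the garbage collector collects the pods. A sketch of the request body, assuming a hypothetical namespace and RC name:

```yaml
# Hypothetical request body, sent with:
#   DELETE /api/v1/namespaces/<namespace>/replicationcontrollers/<name>
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background   # GC deletes dependent pods asynchronously
```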
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:58:56.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 13:58:56.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:59:04.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5100" for this suite.
Feb  4 13:59:46.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 13:59:47.081: INFO: namespace pods-5100 deletion completed in 42.179495875s

• [SLOW TEST:50.919 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 13:59:47.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 13:59:47.153: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7" in namespace "downward-api-7593" to be "success or failure"
Feb  4 13:59:47.159: INFO: Pod "downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279299ms
Feb  4 13:59:49.168: INFO: Pod "downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014975543s
Feb  4 13:59:51.180: INFO: Pod "downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027378561s
Feb  4 13:59:53.190: INFO: Pod "downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037563865s
Feb  4 13:59:55.236: INFO: Pod "downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082898921s
Feb  4 13:59:57.248: INFO: Pod "downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094763345s
STEP: Saw pod success
Feb  4 13:59:57.248: INFO: Pod "downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7" satisfied condition "success or failure"
Feb  4 13:59:57.252: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7 container client-container: 
STEP: delete the pod
Feb  4 13:59:57.360: INFO: Waiting for pod downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7 to disappear
Feb  4 13:59:57.369: INFO: Pod downwardapi-volume-d6515855-ef7c-40d1-90c3-228e0df8b9b7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 13:59:57.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7593" for this suite.
Feb  4 14:00:03.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:00:03.654: INFO: namespace downward-api-7593 deletion completed in 6.266489528s

• [SLOW TEST:16.572 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
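The downward API volume plugin this test exercises projects container resource fields into files. A minimal sketch of such a pod, with hypothetical names (`podinfo`, the image, and the CPU request are illustrative, not the test's actual values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                   # any image that can read a file
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # the value projected into the file below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
  restartPolicy: Never
```

The pod runs to `Succeeded` after printing the projected value, matching the "success or failure" polling seen in the log.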
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:00:03.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-shd5
STEP: Creating a pod to test atomic-volume-subpath
Feb  4 14:00:03.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-shd5" in namespace "subpath-9367" to be "success or failure"
Feb  4 14:00:03.817: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.417396ms
Feb  4 14:00:05.838: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02951402s
Feb  4 14:00:07.848: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039432536s
Feb  4 14:00:09.865: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057032353s
Feb  4 14:00:11.890: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081483043s
Feb  4 14:00:13.897: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 10.089082406s
Feb  4 14:00:15.915: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 12.10653127s
Feb  4 14:00:17.924: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 14.115965084s
Feb  4 14:00:19.932: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 16.124160435s
Feb  4 14:00:21.944: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 18.135886577s
Feb  4 14:00:23.958: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 20.149237589s
Feb  4 14:00:25.969: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 22.160362409s
Feb  4 14:00:27.979: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 24.170875964s
Feb  4 14:00:30.023: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 26.215003155s
Feb  4 14:00:32.032: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Running", Reason="", readiness=true. Elapsed: 28.223513629s
Feb  4 14:00:34.047: INFO: Pod "pod-subpath-test-secret-shd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.239158355s
STEP: Saw pod success
Feb  4 14:00:34.048: INFO: Pod "pod-subpath-test-secret-shd5" satisfied condition "success or failure"
Feb  4 14:00:34.077: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-shd5 container test-container-subpath-secret-shd5: 
STEP: delete the pod
Feb  4 14:00:34.223: INFO: Waiting for pod pod-subpath-test-secret-shd5 to disappear
Feb  4 14:00:34.228: INFO: Pod pod-subpath-test-secret-shd5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-shd5
Feb  4 14:00:34.228: INFO: Deleting pod "pod-subpath-test-secret-shd5" in namespace "subpath-9367"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:00:34.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9367" for this suite.
Feb  4 14:00:40.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:00:40.481: INFO: namespace subpath-9367 deletion completed in 6.246118823s

• [SLOW TEST:36.826 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
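The atomic-writer subpath test mounts a single key of a secret volume via `subPath` rather than the whole volume. A minimal sketch with hypothetical names (the secret, key, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-example   # hypothetical name
spec:
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/mnt/secret-file"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/secret-file
      subPath: secret-file           # mount one key as a file, not the volume root
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret          # hypothetical secret containing key "secret-file"
  restartPolicy: Never
```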
------------------------------
SSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:00:40.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9322, will wait for the garbage collector to delete the pods
Feb  4 14:00:50.682: INFO: Deleting Job.batch foo took: 15.569927ms
Feb  4 14:00:50.983: INFO: Terminating Job.batch foo pods took: 300.707048ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:01:36.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9322" for this suite.
Feb  4 14:01:42.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:01:42.927: INFO: namespace job-9322 deletion completed in 6.204539661s

• [SLOW TEST:62.446 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:01:42.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  4 14:01:43.004: INFO: Waiting up to 5m0s for pod "downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c" in namespace "downward-api-2845" to be "success or failure"
Feb  4 14:01:43.061: INFO: Pod "downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c": Phase="Pending", Reason="", readiness=false. Elapsed: 56.041287ms
Feb  4 14:01:45.068: INFO: Pod "downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063715886s
Feb  4 14:01:47.078: INFO: Pod "downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073199864s
Feb  4 14:01:49.089: INFO: Pod "downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08420342s
Feb  4 14:01:51.097: INFO: Pod "downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092857398s
Feb  4 14:01:53.129: INFO: Pod "downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.124790972s
STEP: Saw pod success
Feb  4 14:01:53.130: INFO: Pod "downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c" satisfied condition "success or failure"
Feb  4 14:01:53.139: INFO: Trying to get logs from node iruya-node pod downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c container dapi-container: 
STEP: delete the pod
Feb  4 14:01:53.384: INFO: Waiting for pod downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c to disappear
Feb  4 14:01:53.449: INFO: Pod downward-api-f9bc00e5-f96c-464e-8a2d-2c96a114de1c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:01:53.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2845" for this suite.
Feb  4 14:01:59.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:01:59.667: INFO: namespace downward-api-2845 deletion completed in 6.202107215s

• [SLOW TEST:16.740 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
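The env-var variant of the downward API exposes resource limits and requests through `resourceFieldRef`. A minimal sketch with hypothetical names and resource values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example     # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
  restartPolicy: Never
```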
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:01:59.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb  4 14:02:07.817: INFO: Pod pod-hostip-d0a28844-d850-4f4e-99c0-75b4509cbe8e has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:02:07.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9450" for this suite.
Feb  4 14:02:29.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:02:30.041: INFO: namespace pods-9450 deletion completed in 22.21652135s

• [SLOW TEST:30.373 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
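The test reads `status.hostIP` from the pod's status once the pod is scheduled. The same value can also be surfaced inside a container via a downward API `fieldRef`; a container-spec fragment (field names are the real API, the env-var name is illustrative):

```yaml
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP   # populated once the pod is bound to a node
```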
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:02:30.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-cf0acc28-bbf7-4fb0-b4cc-3a5971cb7510
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-cf0acc28-bbf7-4fb0-b4cc-3a5971cb7510
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:02:42.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-459" for this suite.
Feb  4 14:03:04.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:03:04.686: INFO: namespace projected-459 deletion completed in 22.239287549s

• [SLOW TEST:34.644 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
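A projected configMap volume, as exercised above, combines one or more sources into a single mount, and the kubelet refreshes the projected files when the ConfigMap changes — which is the "waiting to observe update in volume" step. A volume-spec sketch with hypothetical names:

```yaml
volumes:
- name: config-vol
  projected:
    sources:
    - configMap:
        name: projected-configmap-example   # hypothetical ConfigMap name
        items:
        - key: data-1                       # hypothetical key
          path: path/to/data-1              # file path inside the mount
```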
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:03:04.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  4 14:03:04.885: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-a,UID:d7e53067-78f7-4c63-956f-247197ff02f3,ResourceVersion:23074999,Generation:0,CreationTimestamp:2020-02-04 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  4 14:03:04.886: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-a,UID:d7e53067-78f7-4c63-956f-247197ff02f3,ResourceVersion:23074999,Generation:0,CreationTimestamp:2020-02-04 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  4 14:03:14.902: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-a,UID:d7e53067-78f7-4c63-956f-247197ff02f3,ResourceVersion:23075013,Generation:0,CreationTimestamp:2020-02-04 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  4 14:03:14.903: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-a,UID:d7e53067-78f7-4c63-956f-247197ff02f3,ResourceVersion:23075013,Generation:0,CreationTimestamp:2020-02-04 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  4 14:03:24.923: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-a,UID:d7e53067-78f7-4c63-956f-247197ff02f3,ResourceVersion:23075027,Generation:0,CreationTimestamp:2020-02-04 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  4 14:03:24.923: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-a,UID:d7e53067-78f7-4c63-956f-247197ff02f3,ResourceVersion:23075027,Generation:0,CreationTimestamp:2020-02-04 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  4 14:03:34.943: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-a,UID:d7e53067-78f7-4c63-956f-247197ff02f3,ResourceVersion:23075041,Generation:0,CreationTimestamp:2020-02-04 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  4 14:03:34.943: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-a,UID:d7e53067-78f7-4c63-956f-247197ff02f3,ResourceVersion:23075041,Generation:0,CreationTimestamp:2020-02-04 14:03:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  4 14:03:44.963: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-b,UID:37321cc8-c2f9-4b69-a759-d413683ac267,ResourceVersion:23075055,Generation:0,CreationTimestamp:2020-02-04 14:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  4 14:03:44.964: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-b,UID:37321cc8-c2f9-4b69-a759-d413683ac267,ResourceVersion:23075055,Generation:0,CreationTimestamp:2020-02-04 14:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  4 14:03:54.980: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-b,UID:37321cc8-c2f9-4b69-a759-d413683ac267,ResourceVersion:23075069,Generation:0,CreationTimestamp:2020-02-04 14:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  4 14:03:54.980: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3524,SelfLink:/api/v1/namespaces/watch-3524/configmaps/e2e-watch-test-configmap-b,UID:37321cc8-c2f9-4b69-a759-d413683ac267,ResourceVersion:23075069,Generation:0,CreationTimestamp:2020-02-04 14:03:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:04:04.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3524" for this suite.
Feb  4 14:04:11.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:04:11.144: INFO: namespace watch-3524 deletion completed in 6.155210018s

• [SLOW TEST:66.458 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:04:11.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 14:04:11.251: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd" in namespace "projected-6127" to be "success or failure"
Feb  4 14:04:11.259: INFO: Pod "downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13582ms
Feb  4 14:04:13.271: INFO: Pod "downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019720123s
Feb  4 14:04:15.282: INFO: Pod "downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030810359s
Feb  4 14:04:17.291: INFO: Pod "downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040445974s
Feb  4 14:04:19.299: INFO: Pod "downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd": Phase="Running", Reason="", readiness=true. Elapsed: 8.048034852s
Feb  4 14:04:21.308: INFO: Pod "downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05702937s
STEP: Saw pod success
Feb  4 14:04:21.308: INFO: Pod "downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd" satisfied condition "success or failure"
Feb  4 14:04:21.313: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd container client-container: 
STEP: delete the pod
Feb  4 14:04:21.379: INFO: Waiting for pod downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd to disappear
Feb  4 14:04:21.386: INFO: Pod downwardapi-volume-c6b822d9-6b9b-4396-9d61-c004c08001dd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:04:21.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6127" for this suite.
Feb  4 14:04:27.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:04:27.659: INFO: namespace projected-6127 deletion completed in 6.268380944s

• [SLOW TEST:16.514 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:04:27.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  4 14:04:28.205: INFO: Waiting up to 5m0s for pod "pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db" in namespace "emptydir-9420" to be "success or failure"
Feb  4 14:04:28.228: INFO: Pod "pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db": Phase="Pending", Reason="", readiness=false. Elapsed: 22.985547ms
Feb  4 14:04:30.251: INFO: Pod "pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045920338s
Feb  4 14:04:32.268: INFO: Pod "pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062312072s
Feb  4 14:04:34.275: INFO: Pod "pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070042654s
Feb  4 14:04:36.286: INFO: Pod "pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080841662s
Feb  4 14:04:38.298: INFO: Pod "pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092329525s
STEP: Saw pod success
Feb  4 14:04:38.298: INFO: Pod "pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db" satisfied condition "success or failure"
Feb  4 14:04:38.337: INFO: Trying to get logs from node iruya-node pod pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db container test-container: 
STEP: delete the pod
Feb  4 14:04:38.622: INFO: Waiting for pod pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db to disappear
Feb  4 14:04:38.657: INFO: Pod pod-d40e76d4-6bfa-49b8-8d86-9cc9a20cf3db no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:04:38.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9420" for this suite.
Feb  4 14:04:44.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:04:44.829: INFO: namespace emptydir-9420 deletion completed in 6.16106883s

• [SLOW TEST:17.170 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:04:44.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  4 14:04:53.602: INFO: Successfully updated pod "pod-update-9d182e15-09ec-43bf-84ab-db7d02266c16"
STEP: verifying the updated pod is in kubernetes
Feb  4 14:04:53.642: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:04:53.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1029" for this suite.
Feb  4 14:05:15.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:05:15.809: INFO: namespace pods-1029 deletion completed in 22.158881304s

• [SLOW TEST:30.979 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:05:15.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-44f80e76-c4c6-4161-a86c-10c941dab3f2
STEP: Creating a pod to test consume secrets
Feb  4 14:05:15.890: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec" in namespace "projected-4957" to be "success or failure"
Feb  4 14:05:15.898: INFO: Pod "pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec": Phase="Pending", Reason="", readiness=false. Elapsed: 7.865288ms
Feb  4 14:05:17.906: INFO: Pod "pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01581332s
Feb  4 14:05:19.922: INFO: Pod "pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032084153s
Feb  4 14:05:21.930: INFO: Pod "pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039609035s
Feb  4 14:05:23.940: INFO: Pod "pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049985309s
Feb  4 14:05:25.954: INFO: Pod "pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064012397s
STEP: Saw pod success
Feb  4 14:05:25.954: INFO: Pod "pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec" satisfied condition "success or failure"
Feb  4 14:05:25.959: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec container projected-secret-volume-test: 
STEP: delete the pod
Feb  4 14:05:26.043: INFO: Waiting for pod pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec to disappear
Feb  4 14:05:26.055: INFO: Pod pod-projected-secrets-dc7aa271-3cfc-4173-b593-09afd88916ec no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:05:26.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4957" for this suite.
Feb  4 14:05:32.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:05:32.255: INFO: namespace projected-4957 deletion completed in 6.188325395s

• [SLOW TEST:16.446 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:05:32.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  4 14:05:42.422: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  4 14:05:52.581: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:05:52.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6834" for this suite.
Feb  4 14:05:58.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:05:58.776: INFO: namespace pods-6834 deletion completed in 6.176698396s

• [SLOW TEST:26.521 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:05:58.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  4 14:05:58.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-932'
Feb  4 14:06:01.739: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  4 14:06:01.740: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb  4 14:06:03.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-932'
Feb  4 14:06:04.057: INFO: stderr: ""
Feb  4 14:06:04.057: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:06:04.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-932" for this suite.
Feb  4 14:06:10.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:06:10.700: INFO: namespace kubectl-932 deletion completed in 6.28797312s

• [SLOW TEST:11.923 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:06:10.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  4 14:06:10.794: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb  4 14:06:11.192: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb  4 14:06:13.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 14:06:15.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 14:06:17.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 14:06:19.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 14:06:21.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716421971, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  4 14:06:27.799: INFO: Waited 4.41525255s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:06:28.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5042" for this suite.
Feb  4 14:06:34.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:06:34.719: INFO: namespace aggregator-5042 deletion completed in 6.155037775s

• [SLOW TEST:24.019 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:06:34.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6626.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6626.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6626.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  4 14:06:49.196: INFO: File jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local from pod  dns-6626/dns-test-b9ab70ae-29b5-43b8-88ec-c2b8d0ad8ad9 contains '' instead of 'foo.example.com.'
Feb  4 14:06:49.196: INFO: Lookups using dns-6626/dns-test-b9ab70ae-29b5-43b8-88ec-c2b8d0ad8ad9 failed for: [jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local]

Feb  4 14:06:54.212: INFO: DNS probes using dns-test-b9ab70ae-29b5-43b8-88ec-c2b8d0ad8ad9 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6626.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6626.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6626.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  4 14:07:10.465: INFO: File wheezy_udp@dns-test-service-3.dns-6626.svc.cluster.local from pod  dns-6626/dns-test-db5647b9-3734-45f7-96fc-2f90501343e0 contains '' instead of 'bar.example.com.'
Feb  4 14:07:10.479: INFO: File jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local from pod  dns-6626/dns-test-db5647b9-3734-45f7-96fc-2f90501343e0 contains '' instead of 'bar.example.com.'
Feb  4 14:07:10.479: INFO: Lookups using dns-6626/dns-test-db5647b9-3734-45f7-96fc-2f90501343e0 failed for: [wheezy_udp@dns-test-service-3.dns-6626.svc.cluster.local jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local]

Feb  4 14:07:15.505: INFO: File wheezy_udp@dns-test-service-3.dns-6626.svc.cluster.local from pod  dns-6626/dns-test-db5647b9-3734-45f7-96fc-2f90501343e0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  4 14:07:15.521: INFO: File jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local from pod  dns-6626/dns-test-db5647b9-3734-45f7-96fc-2f90501343e0 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  4 14:07:15.522: INFO: Lookups using dns-6626/dns-test-db5647b9-3734-45f7-96fc-2f90501343e0 failed for: [wheezy_udp@dns-test-service-3.dns-6626.svc.cluster.local jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local]

Feb  4 14:07:20.507: INFO: DNS probes using dns-test-db5647b9-3734-45f7-96fc-2f90501343e0 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6626.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6626.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6626.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  4 14:07:36.760: INFO: File wheezy_udp@dns-test-service-3.dns-6626.svc.cluster.local from pod  dns-6626/dns-test-a91d8628-9191-4516-8ac9-11a6819cd93d contains '' instead of '10.101.202.82'
Feb  4 14:07:36.767: INFO: File jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local from pod  dns-6626/dns-test-a91d8628-9191-4516-8ac9-11a6819cd93d contains '' instead of '10.101.202.82'
Feb  4 14:07:36.767: INFO: Lookups using dns-6626/dns-test-a91d8628-9191-4516-8ac9-11a6819cd93d failed for: [wheezy_udp@dns-test-service-3.dns-6626.svc.cluster.local jessie_udp@dns-test-service-3.dns-6626.svc.cluster.local]

Feb  4 14:07:41.809: INFO: DNS probes using dns-test-a91d8628-9191-4516-8ac9-11a6819cd93d succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:07:42.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6626" for this suite.
Feb  4 14:07:48.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:07:48.337: INFO: namespace dns-6626 deletion completed in 6.220158182s

• [SLOW TEST:73.615 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
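The DNS spec above runs a `dig +short ... > /results/<name>` loop in each probe pod, then compares every captured result file against the expected record and reports all mismatches together ("Lookups using ... failed for: [...]"). An illustrative reimplementation of that comparison step (this is not the e2e framework's code; names are hypothetical):

```python
def failed_lookups(results, expected):
    """Return probe names whose captured dig output does not match `expected`.

    `results` maps a probe name (e.g. 'wheezy_udp@dns-test-service-3...') to
    the contents of the file written by the `dig +short` loop; dig output ends
    with a newline, so both sides are normalized before comparing.
    """
    return [name for name, content in results.items()
            if content.strip() != expected.rstrip()]

# Example mirroring the 14:07:15 log lines: the wheezy probe still sees the
# old CNAME target while the jessie probe already sees the new one.
captured = {
    "wheezy_udp@dns-test-service-3": "foo.example.com.\n",
    "jessie_udp@dns-test-service-3": "bar.example.com.\n",
}
print(failed_lookups(captured, "bar.example.com."))  # ['wheezy_udp@dns-test-service-3']
```

The test retries this comparison every few seconds until the failure list is empty, which is why the log shows several "failed for" lines before "DNS probes ... succeeded".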
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:07:48.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  4 14:07:48.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8135'
Feb  4 14:07:48.831: INFO: stderr: ""
Feb  4 14:07:48.831: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 14:07:48.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:07:49.107: INFO: stderr: ""
Feb  4 14:07:49.107: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb  4 14:07:54.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:07:55.311: INFO: stderr: ""
Feb  4 14:07:55.311: INFO: stdout: "update-demo-nautilus-hp9sd update-demo-nautilus-xldzz "
Feb  4 14:07:55.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp9sd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:07:56.357: INFO: stderr: ""
Feb  4 14:07:56.357: INFO: stdout: ""
Feb  4 14:07:56.357: INFO: update-demo-nautilus-hp9sd is created but not running
Feb  4 14:08:01.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:08:01.504: INFO: stderr: ""
Feb  4 14:08:01.504: INFO: stdout: "update-demo-nautilus-hp9sd update-demo-nautilus-xldzz "
Feb  4 14:08:01.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp9sd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:01.631: INFO: stderr: ""
Feb  4 14:08:01.631: INFO: stdout: "true"
Feb  4 14:08:01.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp9sd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:01.752: INFO: stderr: ""
Feb  4 14:08:01.753: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 14:08:01.753: INFO: validating pod update-demo-nautilus-hp9sd
Feb  4 14:08:01.766: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 14:08:01.766: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 14:08:01.766: INFO: update-demo-nautilus-hp9sd is verified up and running
Feb  4 14:08:01.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xldzz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:01.884: INFO: stderr: ""
Feb  4 14:08:01.884: INFO: stdout: "true"
Feb  4 14:08:01.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xldzz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:01.975: INFO: stderr: ""
Feb  4 14:08:01.976: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 14:08:01.976: INFO: validating pod update-demo-nautilus-xldzz
Feb  4 14:08:01.992: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 14:08:01.992: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 14:08:01.992: INFO: update-demo-nautilus-xldzz is verified up and running
STEP: scaling down the replication controller
Feb  4 14:08:01.994: INFO: scanned /root for discovery docs: 
Feb  4 14:08:01.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8135'
Feb  4 14:08:03.119: INFO: stderr: ""
Feb  4 14:08:03.119: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 14:08:03.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:08:03.235: INFO: stderr: ""
Feb  4 14:08:03.235: INFO: stdout: "update-demo-nautilus-hp9sd update-demo-nautilus-xldzz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  4 14:08:08.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:08:08.373: INFO: stderr: ""
Feb  4 14:08:08.373: INFO: stdout: "update-demo-nautilus-hp9sd update-demo-nautilus-xldzz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  4 14:08:13.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:08:13.550: INFO: stderr: ""
Feb  4 14:08:13.551: INFO: stdout: "update-demo-nautilus-hp9sd update-demo-nautilus-xldzz "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  4 14:08:18.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:08:18.693: INFO: stderr: ""
Feb  4 14:08:18.694: INFO: stdout: "update-demo-nautilus-hp9sd "
Feb  4 14:08:18.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp9sd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:18.784: INFO: stderr: ""
Feb  4 14:08:18.784: INFO: stdout: "true"
Feb  4 14:08:18.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp9sd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:18.895: INFO: stderr: ""
Feb  4 14:08:18.895: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 14:08:18.895: INFO: validating pod update-demo-nautilus-hp9sd
Feb  4 14:08:18.901: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 14:08:18.901: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 14:08:18.901: INFO: update-demo-nautilus-hp9sd is verified up and running
STEP: scaling up the replication controller
Feb  4 14:08:18.905: INFO: scanned /root for discovery docs: 
Feb  4 14:08:18.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8135'
Feb  4 14:08:20.111: INFO: stderr: ""
Feb  4 14:08:20.111: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 14:08:20.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:08:20.223: INFO: stderr: ""
Feb  4 14:08:20.223: INFO: stdout: "update-demo-nautilus-7zzjr update-demo-nautilus-hp9sd "
Feb  4 14:08:20.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7zzjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:20.377: INFO: stderr: ""
Feb  4 14:08:20.377: INFO: stdout: ""
Feb  4 14:08:20.377: INFO: update-demo-nautilus-7zzjr is created but not running
Feb  4 14:08:25.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:08:25.564: INFO: stderr: ""
Feb  4 14:08:25.564: INFO: stdout: "update-demo-nautilus-7zzjr update-demo-nautilus-hp9sd "
Feb  4 14:08:25.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7zzjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:25.647: INFO: stderr: ""
Feb  4 14:08:25.647: INFO: stdout: ""
Feb  4 14:08:25.647: INFO: update-demo-nautilus-7zzjr is created but not running
Feb  4 14:08:30.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8135'
Feb  4 14:08:30.738: INFO: stderr: ""
Feb  4 14:08:30.738: INFO: stdout: "update-demo-nautilus-7zzjr update-demo-nautilus-hp9sd "
Feb  4 14:08:30.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7zzjr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:30.818: INFO: stderr: ""
Feb  4 14:08:30.818: INFO: stdout: "true"
Feb  4 14:08:30.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7zzjr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:30.894: INFO: stderr: ""
Feb  4 14:08:30.895: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 14:08:30.895: INFO: validating pod update-demo-nautilus-7zzjr
Feb  4 14:08:30.908: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 14:08:30.909: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 14:08:30.909: INFO: update-demo-nautilus-7zzjr is verified up and running
Feb  4 14:08:30.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp9sd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:31.010: INFO: stderr: ""
Feb  4 14:08:31.010: INFO: stdout: "true"
Feb  4 14:08:31.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hp9sd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8135'
Feb  4 14:08:31.103: INFO: stderr: ""
Feb  4 14:08:31.103: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 14:08:31.103: INFO: validating pod update-demo-nautilus-hp9sd
Feb  4 14:08:31.111: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 14:08:31.111: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 14:08:31.111: INFO: update-demo-nautilus-hp9sd is verified up and running
STEP: using delete to clean up resources
Feb  4 14:08:31.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8135'
Feb  4 14:08:31.216: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 14:08:31.216: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  4 14:08:31.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8135'
Feb  4 14:08:31.319: INFO: stderr: "No resources found.\n"
Feb  4 14:08:31.319: INFO: stdout: ""
Feb  4 14:08:31.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8135 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  4 14:08:31.419: INFO: stderr: ""
Feb  4 14:08:31.419: INFO: stdout: "update-demo-nautilus-7zzjr\nupdate-demo-nautilus-hp9sd\n"
Feb  4 14:08:31.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8135'
Feb  4 14:08:33.025: INFO: stderr: "No resources found.\n"
Feb  4 14:08:33.025: INFO: stdout: ""
Feb  4 14:08:33.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8135 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  4 14:08:33.291: INFO: stderr: ""
Feb  4 14:08:33.291: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:08:33.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8135" for this suite.
Feb  4 14:08:55.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:08:55.554: INFO: namespace kubectl-8135 deletion completed in 22.251171073s

• [SLOW TEST:67.217 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
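The scale spec above repeatedly evaluates a go-template (`{{if (exists . "status" "containerStatuses")}}...`) that prints "true" only when the pod has a containerStatuses entry named "update-demo" whose state includes "running". The same check, sketched in Python over the JSON shape that `kubectl get pod -o json` returns (illustrative only, not the test's actual template engine):

```python
def container_running(pod, container_name="update-demo"):
    """True if the pod reports a running container with the given name.

    `pod` is a dict in the shape of `kubectl get pod -o json` output; the
    check mirrors the go-template in the log: look through
    status.containerStatuses for a matching name with a "running" state key.
    """
    statuses = pod.get("status", {}).get("containerStatuses", [])
    return any(
        s.get("name") == container_name and "running" in s.get("state", {})
        for s in statuses
    )

# A pod mid-startup has a "waiting" state, so the template prints nothing —
# which is why the log shows "is created but not running" until it flips.
pending = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"waiting": {"reason": "ContainerCreating"}}}]}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-02-04T14:08:00Z"}}}]}}
print("true" if container_running(running) else "")   # true
print("true" if container_running(pending) else "")   # (empty, like the log's empty stdout)
```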
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:08:55.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-819/configmap-test-b7a0edd7-14d4-4db1-aadd-5621293d57d6
STEP: Creating a pod to test consume configMaps
Feb  4 14:08:55.674: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd" in namespace "configmap-819" to be "success or failure"
Feb  4 14:08:55.703: INFO: Pod "pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.948331ms
Feb  4 14:08:57.715: INFO: Pod "pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040833095s
Feb  4 14:08:59.723: INFO: Pod "pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048353582s
Feb  4 14:09:01.732: INFO: Pod "pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057338266s
Feb  4 14:09:03.741: INFO: Pod "pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066267205s
Feb  4 14:09:06.332: INFO: Pod "pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.657978467s
STEP: Saw pod success
Feb  4 14:09:06.333: INFO: Pod "pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd" satisfied condition "success or failure"
Feb  4 14:09:06.345: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd container env-test: 
STEP: delete the pod
Feb  4 14:09:06.559: INFO: Waiting for pod pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd to disappear
Feb  4 14:09:06.639: INFO: Pod pod-configmaps-c6e90cc1-801a-460f-a089-909d6d4f57fd no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:09:06.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-819" for this suite.
Feb  4 14:09:12.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:09:12.788: INFO: namespace configmap-819 deletion completed in 6.137086473s

• [SLOW TEST:17.232 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:09:12.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 14:09:12.871: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 28.969189ms)
Feb  4 14:09:12.896: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 24.865041ms)
Feb  4 14:09:12.902: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.724235ms)
Feb  4 14:09:12.908: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.837159ms)
Feb  4 14:09:12.915: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.326547ms)
Feb  4 14:09:12.919: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.051361ms)
Feb  4 14:09:12.925: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.449473ms)
Feb  4 14:09:12.928: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.581592ms)
Feb  4 14:09:12.933: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.633206ms)
Feb  4 14:09:12.937: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.60018ms)
Feb  4 14:09:12.942: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.514719ms)
Feb  4 14:09:12.947: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.247932ms)
Feb  4 14:09:12.952: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.706965ms)
Feb  4 14:09:12.958: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.913838ms)
Feb  4 14:09:12.966: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.080213ms)
Feb  4 14:09:12.991: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 24.235332ms)
Feb  4 14:09:13.005: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.956896ms)
Feb  4 14:09:13.022: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.058478ms)
Feb  4 14:09:13.030: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.139796ms)
Feb  4 14:09:13.038: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.718432ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:09:13.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-930" for this suite.
Feb  4 14:09:19.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:09:19.229: INFO: namespace proxy-930 deletion completed in 6.185880286s

• [SLOW TEST:6.441 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:09:19.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb  4 14:09:19.375: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9934" to be "success or failure"
Feb  4 14:09:19.382: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.613772ms
Feb  4 14:09:21.390: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015034975s
Feb  4 14:09:23.403: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028011565s
Feb  4 14:09:25.415: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039861683s
Feb  4 14:09:27.430: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055237443s
Feb  4 14:09:29.439: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063844336s
Feb  4 14:09:31.475: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.100477068s
STEP: Saw pod success
Feb  4 14:09:31.476: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  4 14:09:31.485: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  4 14:09:31.557: INFO: Waiting for pod pod-host-path-test to disappear
Feb  4 14:09:31.636: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:09:31.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9934" for this suite.
Feb  4 14:09:39.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:09:39.877: INFO: namespace hostpath-9934 deletion completed in 8.230781569s

• [SLOW TEST:20.647 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:09:39.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-0fd858ff-7e3f-43cc-b78a-77ef33395aae
STEP: Creating configMap with name cm-test-opt-upd-b4269657-7470-47d9-8570-5d99b46fbc30
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0fd858ff-7e3f-43cc-b78a-77ef33395aae
STEP: Updating configmap cm-test-opt-upd-b4269657-7470-47d9-8570-5d99b46fbc30
STEP: Creating configMap with name cm-test-opt-create-2ed292d8-b54c-4ade-b7b7-a0328117fbb7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:11:14.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4349" for this suite.
Feb  4 14:11:36.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:11:36.687: INFO: namespace projected-4349 deletion completed in 22.103271162s

• [SLOW TEST:116.810 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:11:36.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9b6de990-9df2-4cfd-9b52-0e3d548ea962
STEP: Creating a pod to test consume configMaps
Feb  4 14:11:36.777: INFO: Waiting up to 5m0s for pod "pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5" in namespace "configmap-4438" to be "success or failure"
Feb  4 14:11:36.782: INFO: Pod "pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472145ms
Feb  4 14:11:38.800: INFO: Pod "pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022435795s
Feb  4 14:11:40.808: INFO: Pod "pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030938426s
Feb  4 14:11:42.822: INFO: Pod "pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044935772s
Feb  4 14:11:44.837: INFO: Pod "pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05910179s
STEP: Saw pod success
Feb  4 14:11:44.837: INFO: Pod "pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5" satisfied condition "success or failure"
Feb  4 14:11:44.842: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5 container configmap-volume-test: 
STEP: delete the pod
Feb  4 14:11:44.929: INFO: Waiting for pod pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5 to disappear
Feb  4 14:11:44.949: INFO: Pod pod-configmaps-e70c627b-f498-4215-a94f-74c08c88bec5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:11:44.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4438" for this suite.
Feb  4 14:11:51.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:11:51.135: INFO: namespace configmap-4438 deletion completed in 6.18028442s

• [SLOW TEST:14.448 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
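For context, the test above mounts a ConfigMap as a volume with an `items:` key-to-path mapping and reads it as a non-root user. A minimal sketch of that shape (object names, image, and the key/path values here are illustrative, not the exact objects the framework built):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map     # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                   # non-root, per the [LinuxOnly] non-root variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-2          # the "mapping" under test: key renamed on disk
```

The pod exits `Succeeded` once `cat` prints the mapped value, which matches the "success or failure" wait loop in the log.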
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:11:51.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:12:01.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2950" for this suite.
Feb  4 14:12:53.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:12:53.539: INFO: namespace kubelet-test-2950 deletion completed in 52.216529636s

• [SLOW TEST:62.403 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
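The hostAliases test above relies on the kubelet injecting extra entries into the pod's `/etc/hosts`. A hedged sketch of such a pod (names and addresses are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases          # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]
```

With this spec, `/etc/hosts` inside the container contains a kubelet-managed section including the `127.0.0.1 foo.local bar.local` line.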
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:12:53.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  4 14:12:53.747: INFO: Waiting up to 5m0s for pod "downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418" in namespace "downward-api-1424" to be "success or failure"
Feb  4 14:12:53.758: INFO: Pod "downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418": Phase="Pending", Reason="", readiness=false. Elapsed: 11.542112ms
Feb  4 14:12:55.773: INFO: Pod "downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02592591s
Feb  4 14:12:57.840: INFO: Pod "downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092955384s
Feb  4 14:12:59.879: INFO: Pod "downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132334021s
Feb  4 14:13:01.886: INFO: Pod "downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139683841s
Feb  4 14:13:03.895: INFO: Pod "downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148758201s
STEP: Saw pod success
Feb  4 14:13:03.896: INFO: Pod "downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418" satisfied condition "success or failure"
Feb  4 14:13:03.898: INFO: Trying to get logs from node iruya-node pod downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418 container dapi-container: 
STEP: delete the pod
Feb  4 14:13:04.076: INFO: Waiting for pod downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418 to disappear
Feb  4 14:13:04.081: INFO: Pod downward-api-a9fc880a-32bb-4a4b-b891-13d4dcd8b418 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:13:04.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1424" for this suite.
Feb  4 14:13:10.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:13:10.264: INFO: namespace downward-api-1424 deletion completed in 6.175721637s

• [SLOW TEST:16.724 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
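The Downward API test above exposes pod metadata as environment variables via `fieldRef`. A minimal sketch of the pattern (pod and variable names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```

The test then scrapes the container log (the `env` output) to verify each variable was populated.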
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:13:10.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-3658ae6d-4ca5-413a-a2ab-2b442a77ae65
STEP: Creating a pod to test consume configMaps
Feb  4 14:13:10.405: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf" in namespace "projected-8447" to be "success or failure"
Feb  4 14:13:10.439: INFO: Pod "pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf": Phase="Pending", Reason="", readiness=false. Elapsed: 34.264347ms
Feb  4 14:13:12.459: INFO: Pod "pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054225588s
Feb  4 14:13:14.470: INFO: Pod "pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065247001s
Feb  4 14:13:16.490: INFO: Pod "pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084636146s
Feb  4 14:13:18.505: INFO: Pod "pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099738853s
Feb  4 14:13:20.905: INFO: Pod "pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.499689103s
STEP: Saw pod success
Feb  4 14:13:20.905: INFO: Pod "pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf" satisfied condition "success or failure"
Feb  4 14:13:21.166: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 14:13:21.240: INFO: Waiting for pod pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf to disappear
Feb  4 14:13:21.248: INFO: Pod pod-projected-configmaps-fad38001-1e36-49ef-b1f8-d62917e015cf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:13:21.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8447" for this suite.
Feb  4 14:13:27.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:13:27.477: INFO: namespace projected-8447 deletion completed in 6.220891176s

• [SLOW TEST:17.213 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
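The projected-ConfigMap test above is the `projected` volume variant with a per-item file `mode`. A hedged sketch (names, key, and mode value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map  # hypothetical name
          items:
          - key: data-1
            path: path/to/data-2
            mode: 0400                    # the "Item mode" under test
```

The test asserts both the file content and that the mapped file carries the requested permission bits.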
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:13:27.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  4 14:13:36.695: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:13:37.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6192" for this suite.
Feb  4 14:14:01.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:14:01.955: INFO: namespace replicaset-6192 deletion completed in 24.205969226s

• [SLOW TEST:34.478 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
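The adoption/release flow above hinges on label-selector ownership. A hedged sketch of the ReplicaSet involved (image is illustrative; the selector matches the pre-created orphan pod's `name` label from the log):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release      # matches the pre-existing pod, so the RS adopts it
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: nginx
```

Adoption sets an `ownerReference` on the matching pod; changing that pod's `name` label afterwards makes it stop matching, so the controller releases it (drops the ownerReference) and creates a replacement to restore `replicas: 1`.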
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:14:01.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-e1183e3f-0bd5-416f-87f3-aeee330975b1
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e1183e3f-0bd5-416f-87f3-aeee330975b1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:15:13.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1692" for this suite.
Feb  4 14:15:35.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:15:35.891: INFO: namespace configmap-1692 deletion completed in 22.291871658s

• [SLOW TEST:93.936 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:15:35.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb  4 14:15:36.048: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:15:36.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5735" for this suite.
Feb  4 14:15:42.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:15:42.317: INFO: namespace kubectl-5735 deletion completed in 6.142010716s

• [SLOW TEST:6.425 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:15:42.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7788
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7788
STEP: Deleting pre-stop pod
Feb  4 14:16:03.622: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:16:03.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7788" for this suite.
Feb  4 14:16:49.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:16:49.865: INFO: namespace prestop-7788 deletion completed in 46.214079237s

• [SLOW TEST:67.547 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:16:49.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 14:16:50.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d" in namespace "downward-api-1338" to be "success or failure"
Feb  4 14:16:50.021: INFO: Pod "downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.867374ms
Feb  4 14:16:52.031: INFO: Pod "downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023659005s
Feb  4 14:16:54.046: INFO: Pod "downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038861731s
Feb  4 14:16:56.057: INFO: Pod "downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05006923s
Feb  4 14:16:58.066: INFO: Pod "downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058906383s
Feb  4 14:17:00.075: INFO: Pod "downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068178051s
STEP: Saw pod success
Feb  4 14:17:00.075: INFO: Pod "downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d" satisfied condition "success or failure"
Feb  4 14:17:00.080: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d container client-container: 
STEP: delete the pod
Feb  4 14:17:00.519: INFO: Waiting for pod downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d to disappear
Feb  4 14:17:00.536: INFO: Pod downwardapi-volume-bc2b58ef-f520-4efa-8c10-954d165b124d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:17:00.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1338" for this suite.
Feb  4 14:17:06.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:17:06.801: INFO: namespace downward-api-1338 deletion completed in 6.258234488s

• [SLOW TEST:16.937 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
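The Downward API *volume* test above exposes a container's resource request as a file via `resourceFieldRef`. A minimal sketch (names and the 32Mi request are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container   # required in volume items
          resource: requests.memory
```

With the default divisor the file holds the value in base units (bytes for memory), which is what the test compares against the log output.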
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:17:06.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  4 14:17:06.875: INFO: Waiting up to 5m0s for pod "downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7" in namespace "downward-api-8274" to be "success or failure"
Feb  4 14:17:06.885: INFO: Pod "downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.861811ms
Feb  4 14:17:08.900: INFO: Pod "downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024733725s
Feb  4 14:17:10.911: INFO: Pod "downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036004955s
Feb  4 14:17:12.918: INFO: Pod "downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043392671s
Feb  4 14:17:14.926: INFO: Pod "downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051150852s
STEP: Saw pod success
Feb  4 14:17:14.926: INFO: Pod "downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7" satisfied condition "success or failure"
Feb  4 14:17:14.929: INFO: Trying to get logs from node iruya-node pod downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7 container dapi-container: 
STEP: delete the pod
Feb  4 14:17:14.979: INFO: Waiting for pod downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7 to disappear
Feb  4 14:17:14.985: INFO: Pod downward-api-6607cd79-4000-44b6-88ba-7a314c6316e7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:17:14.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8274" for this suite.
Feb  4 14:17:21.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:17:21.098: INFO: namespace downward-api-8274 deletion completed in 6.106310455s

• [SLOW TEST:14.296 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
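The "default limits" test above uses `resourceFieldRef` for `limits.cpu`/`limits.memory` on a container that declares no limits; the Downward API then falls back to the node's allocatable capacity. A hedged sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    # No resources: section, so the values below default to node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```

For env vars, `containerName` may be omitted and defaults to the enclosing container.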
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:17:21.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-e89bdfb4-e5c5-4ac1-9b2e-aa963d566941
STEP: Creating configMap with name cm-test-opt-upd-24ef4ffa-f6e1-41fb-8f84-bf4a496fe7ba
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e89bdfb4-e5c5-4ac1-9b2e-aa963d566941
STEP: Updating configmap cm-test-opt-upd-24ef4ffa-f6e1-41fb-8f84-bf4a496fe7ba
STEP: Creating configMap with name cm-test-opt-create-2e9007d1-ff8e-42e8-b501-59f5be3f3f71
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:17:35.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8109" for this suite.
Feb  4 14:17:57.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:17:57.832: INFO: namespace configmap-8109 deletion completed in 22.198764278s

• [SLOW TEST:36.733 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
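The optional-ConfigMap test above mounts a ConfigMap that may be deleted, updated, or not yet created, using `optional: true`. A hedged sketch of the "create" case (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional       # hypothetical name
spec:
  containers:
  - name: cm-volume-test
    image: busybox
    command: ["sleep", "600"]
    volumeMounts:
    - name: cm-create
      mountPath: /etc/cm-volume-create
  volumes:
  - name: cm-create
    configMap:
      name: cm-test-opt-create        # in the log, created only after the pod is running
      optional: true                  # pod starts anyway; files appear once the ConfigMap exists
```

Because ConfigMap volumes are kept in sync by the kubelet, the deletion, update, and late creation all become visible in the mounted directory, which is the "waiting to observe update in volume" step in the log.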
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:17:57.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb  4 14:17:57.912: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix389383621/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:17:57.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-957" for this suite.
Feb  4 14:18:04.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:18:04.529: INFO: namespace kubectl-957 deletion completed in 6.217995768s

• [SLOW TEST:6.697 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:18:04.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0204 14:18:35.176212       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  4 14:18:35.176: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:18:35.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6310" for this suite.
Feb  4 14:18:41.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:18:41.303: INFO: namespace gc-6310 deletion completed in 6.121833195s

• [SLOW TEST:36.773 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
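Editor's note: the orphan behavior verified above is requested through the delete options, not through the Deployment itself. A minimal sketch of the DeleteOptions body (resource names hypothetical, not from this run):

```yaml
# DeleteOptions body sent with the DELETE request.
# propagationPolicy: Orphan tells the garbage collector to remove the
# Deployment but leave its dependent ReplicaSets (and their Pods) in place,
# which is what the test waits 30 seconds to confirm.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With kubectl this is roughly `kubectl delete deployment <name> --cascade=orphan` on recent releases (older clients used `--cascade=false`).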
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:18:41.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 14:18:42.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7521'
Feb  4 14:18:46.600: INFO: stderr: ""
Feb  4 14:18:46.601: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb  4 14:18:46.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7521'
Feb  4 14:18:47.207: INFO: stderr: ""
Feb  4 14:18:47.207: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  4 14:18:48.217: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:18:48.217: INFO: Found 0 / 1
Feb  4 14:18:49.226: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:18:49.227: INFO: Found 0 / 1
Feb  4 14:18:50.224: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:18:50.225: INFO: Found 0 / 1
Feb  4 14:18:51.218: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:18:51.218: INFO: Found 0 / 1
Feb  4 14:18:52.217: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:18:52.217: INFO: Found 0 / 1
Feb  4 14:18:53.216: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:18:53.216: INFO: Found 1 / 1
Feb  4 14:18:53.216: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  4 14:18:53.220: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:18:53.220: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  4 14:18:53.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-5ml78 --namespace=kubectl-7521'
Feb  4 14:18:53.370: INFO: stderr: ""
Feb  4 14:18:53.370: INFO: stdout: "Name:           redis-master-5ml78\nNamespace:      kubectl-7521\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Tue, 04 Feb 2020 14:18:46 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://0bb2482c697d9d2457ee909f5c82c71d725bf36d1363bd437aad4ebe370d04f9\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 04 Feb 2020 14:18:52 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pbmcq (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-pbmcq:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-pbmcq\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  7s    default-scheduler    Successfully assigned kubectl-7521/redis-master-5ml78 to iruya-node\n  Normal  Pulled     3s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Feb  4 14:18:53.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7521'
Feb  4 14:18:53.502: INFO: stderr: ""
Feb  4 14:18:53.502: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-7521\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: redis-master-5ml78\n"
Feb  4 14:18:53.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7521'
Feb  4 14:18:53.626: INFO: stderr: ""
Feb  4 14:18:53.626: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-7521\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.102.78.35\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Feb  4 14:18:53.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb  4 14:18:53.740: INFO: stderr: ""
Feb  4 14:18:53.741: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 04 Feb 2020 14:18:00 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 04 Feb 2020 14:18:00 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 04 Feb 2020 14:18:00 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 04 Feb 2020 14:18:00 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         184d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         115d\n  kubectl-7521               redis-master-5ml78    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Feb  4 14:18:53.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7521'
Feb  4 14:18:53.836: INFO: stderr: ""
Feb  4 14:18:53.836: INFO: stdout: "Name:         kubectl-7521\nLabels:       e2e-framework=kubectl\n              e2e-run=dd3dcc5b-0f6b-485b-ba78-f7d520ec03e1\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:18:53.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7521" for this suite.
Feb  4 14:19:15.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:19:16.021: INFO: namespace kubectl-7521 deletion completed in 22.180529827s

• [SLOW TEST:34.718 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:19:16.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-e6b911a1-b647-420a-b10c-c5963e6364f9
STEP: Creating a pod to test consume secrets
Feb  4 14:19:16.208: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81" in namespace "projected-9308" to be "success or failure"
Feb  4 14:19:16.227: INFO: Pod "pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81": Phase="Pending", Reason="", readiness=false. Elapsed: 19.610279ms
Feb  4 14:19:18.239: INFO: Pod "pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031659347s
Feb  4 14:19:20.247: INFO: Pod "pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039397732s
Feb  4 14:19:22.257: INFO: Pod "pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049708153s
Feb  4 14:19:24.270: INFO: Pod "pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062094929s
Feb  4 14:19:26.282: INFO: Pod "pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074685526s
STEP: Saw pod success
Feb  4 14:19:26.283: INFO: Pod "pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81" satisfied condition "success or failure"
Feb  4 14:19:26.289: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81 container projected-secret-volume-test: <nil>
STEP: delete the pod
Feb  4 14:19:26.590: INFO: Waiting for pod pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81 to disappear
Feb  4 14:19:26.604: INFO: Pod pod-projected-secrets-0271c7af-ed44-4728-bd0f-8be7f9cc8a81 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:19:26.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9308" for this suite.
Feb  4 14:19:32.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:19:32.754: INFO: namespace projected-9308 deletion completed in 6.14201359s

• [SLOW TEST:16.732 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
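Editor's note: the projected-secret test above mounts a Secret key at a remapped path with an explicit file mode. A minimal pod sketch of the same shape (secret name, key, and paths are hypothetical, not the generated names from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: demo-secret          # hypothetical Secret
          items:
          - key: data-1              # key remapped to new-path ("mappings")
            path: new-path
            mode: 0400               # per-item file mode ("Item Mode set")
```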
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:19:32.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  4 14:19:32.903: INFO: Waiting up to 5m0s for pod "pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849" in namespace "emptydir-2349" to be "success or failure"
Feb  4 14:19:32.916: INFO: Pod "pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849": Phase="Pending", Reason="", readiness=false. Elapsed: 13.357015ms
Feb  4 14:19:34.924: INFO: Pod "pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020939337s
Feb  4 14:19:36.931: INFO: Pod "pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028513378s
Feb  4 14:19:38.942: INFO: Pod "pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039185338s
Feb  4 14:19:40.953: INFO: Pod "pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050147896s
STEP: Saw pod success
Feb  4 14:19:40.953: INFO: Pod "pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849" satisfied condition "success or failure"
Feb  4 14:19:40.958: INFO: Trying to get logs from node iruya-node pod pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849 container test-container: <nil>
STEP: delete the pod
Feb  4 14:19:41.033: INFO: Waiting for pod pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849 to disappear
Feb  4 14:19:41.039: INFO: Pod pod-3fc53dc2-bd52-4e94-973f-f0aae0a73849 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:19:41.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2349" for this suite.
Feb  4 14:19:47.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:19:47.183: INFO: namespace emptydir-2349 deletion completed in 6.13815465s

• [SLOW TEST:14.429 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
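Editor's note: the (non-root,0666,tmpfs) variant above combines a non-root security context with a memory-backed emptyDir. A minimal sketch, assuming hypothetical names; the 0666 in the test title is the file mode the test container creates and checks:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # "non-root" part of the variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/volume && mount | grep /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # "tmpfs" part: RAM-backed emptyDir
```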
SSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:19:47.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  4 14:19:56.209: INFO: Successfully updated pod "labelsupdateb9518236-918a-47fb-901d-8e837e2e2f64"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:20:00.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-429" for this suite.
Feb  4 14:20:22.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:20:22.451: INFO: namespace downward-api-429 deletion completed in 22.121759128s

• [SLOW TEST:35.267 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
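Editor's note: the labels-update test relies on the kubelet refreshing a downward-API volume when pod metadata changes. A sketch of such a pod (names and label values hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo                  # hypothetical
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```

After something like `kubectl label pod labels-demo key2=value2`, the kubelet rewrites /etc/podinfo/labels; the update is eventually consistent, which is why the test polls.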
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:20:22.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 14:20:22.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba" in namespace "projected-434" to be "success or failure"
Feb  4 14:20:22.611: INFO: Pod "downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba": Phase="Pending", Reason="", readiness=false. Elapsed: 85.629196ms
Feb  4 14:20:24.622: INFO: Pod "downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09673028s
Feb  4 14:20:26.629: INFO: Pod "downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103484326s
Feb  4 14:20:28.643: INFO: Pod "downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117034368s
Feb  4 14:20:30.655: INFO: Pod "downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.129457611s
STEP: Saw pod success
Feb  4 14:20:30.655: INFO: Pod "downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba" satisfied condition "success or failure"
Feb  4 14:20:30.659: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba container client-container: <nil>
STEP: delete the pod
Feb  4 14:20:30.737: INFO: Waiting for pod downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba to disappear
Feb  4 14:20:30.808: INFO: Pod downwardapi-volume-9fb11fc7-1e5c-48a6-8a0b-8d0a46522aba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:20:30.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-434" for this suite.
Feb  4 14:20:36.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:20:37.000: INFO: namespace projected-434 deletion completed in 6.184567576s

• [SLOW TEST:14.549 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
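Editor's note: the cpu-request test above exposes a container resource request through a projected downward-API volume. A minimal sketch (names and the 250m request are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-demo             # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # the value surfaced in the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m            # request expressed in millicores
```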
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:20:37.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  4 14:20:46.277: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:20:46.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2376" for this suite.
Feb  4 14:20:52.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:20:52.585: INFO: namespace container-runtime-2376 deletion completed in 6.126277768s

• [SLOW TEST:15.584 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
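Editor's note: the termination-message test above checks the FallbackToLogsOnError policy on a succeeding container. A minimal sketch (pod and container names hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo             # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: term-container
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    # With FallbackToLogsOnError, container logs are copied into the
    # termination message only when the container fails; on success the
    # message stays empty, which is exactly what the test asserts above
    # ("Expected: &{} to match Container's Termination Message:  --").
    terminationMessagePolicy: FallbackToLogsOnError
```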
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:20:52.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c37f736e-b493-407d-b731-54e77985424d
STEP: Creating a pod to test consume configMaps
Feb  4 14:20:52.727: INFO: Waiting up to 5m0s for pod "pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28" in namespace "configmap-6682" to be "success or failure"
Feb  4 14:20:52.835: INFO: Pod "pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28": Phase="Pending", Reason="", readiness=false. Elapsed: 107.251238ms
Feb  4 14:20:54.844: INFO: Pod "pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116889839s
Feb  4 14:20:56.859: INFO: Pod "pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131717333s
Feb  4 14:20:58.873: INFO: Pod "pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145668914s
Feb  4 14:21:00.883: INFO: Pod "pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.155860718s
STEP: Saw pod success
Feb  4 14:21:00.883: INFO: Pod "pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28" satisfied condition "success or failure"
Feb  4 14:21:00.886: INFO: Trying to get logs from node iruya-node pod pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28 container configmap-volume-test: <nil>
STEP: delete the pod
Feb  4 14:21:00.965: INFO: Waiting for pod pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28 to disappear
Feb  4 14:21:00.972: INFO: Pod pod-configmaps-077a6e21-49c8-4fdb-bf75-66ca29492b28 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:21:00.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6682" for this suite.
Feb  4 14:21:07.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:21:07.118: INFO: namespace configmap-6682 deletion completed in 6.135987287s

• [SLOW TEST:14.532 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
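Editor's note: the ConfigMap test above mounts the same ConfigMap as two separate volumes in one pod. A minimal sketch (ConfigMap name, key, and mount paths hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-multi-demo         # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
    volumeMounts:
    - name: cm-vol-1
      mountPath: /etc/cm-1
    - name: cm-vol-2
      mountPath: /etc/cm-2
  volumes:                           # same ConfigMap referenced twice
  - name: cm-vol-1
    configMap:
      name: demo-config              # hypothetical ConfigMap with key data-1
  - name: cm-vol-2
    configMap:
      name: demo-config
```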
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:21:07.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2896.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2896.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2896.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2896.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2896.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2896.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  4 14:21:19.291: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2896/dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a: the server could not find the requested resource (get pods dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a)
Feb  4 14:21:19.298: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2896/dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a: the server could not find the requested resource (get pods dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a)
Feb  4 14:21:19.308: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2896.svc.cluster.local from pod dns-2896/dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a: the server could not find the requested resource (get pods dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a)
Feb  4 14:21:19.320: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2896/dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a: the server could not find the requested resource (get pods dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a)
Feb  4 14:21:19.328: INFO: Unable to read jessie_udp@PodARecord from pod dns-2896/dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a: the server could not find the requested resource (get pods dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a)
Feb  4 14:21:19.334: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2896/dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a: the server could not find the requested resource (get pods dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a)
Feb  4 14:21:19.334: INFO: Lookups using dns-2896/dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2896.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  4 14:21:24.391: INFO: DNS probes using dns-2896/dns-test-b115e9b2-9ac8-4c4d-935c-a1288369fd9a succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:21:24.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2896" for this suite.
Feb  4 14:21:30.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:21:30.723: INFO: namespace dns-2896 deletion completed in 6.227560023s

• [SLOW TEST:23.604 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:21:30.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  4 14:21:30.859: INFO: Waiting up to 5m0s for pod "pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5" in namespace "emptydir-990" to be "success or failure"
Feb  4 14:21:30.880: INFO: Pod "pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.744944ms
Feb  4 14:21:32.890: INFO: Pod "pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03123057s
Feb  4 14:21:34.904: INFO: Pod "pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045545188s
Feb  4 14:21:36.967: INFO: Pod "pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108333866s
Feb  4 14:21:38.977: INFO: Pod "pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118419925s
Feb  4 14:21:40.985: INFO: Pod "pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125733072s
STEP: Saw pod success
Feb  4 14:21:40.985: INFO: Pod "pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5" satisfied condition "success or failure"
Feb  4 14:21:40.988: INFO: Trying to get logs from node iruya-node pod pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5 container test-container: 
STEP: delete the pod
Feb  4 14:21:41.047: INFO: Waiting for pod pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5 to disappear
Feb  4 14:21:41.100: INFO: Pod pod-da19e6a8-7d8b-49f4-9858-d6c8c2d09cf5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:21:41.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-990" for this suite.
Feb  4 14:21:47.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:21:47.211: INFO: namespace emptydir-990 deletion completed in 6.106263804s

• [SLOW TEST:16.487 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:21:47.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-54f71e07-5a61-422e-b2cd-e7ce587da9d0
STEP: Creating a pod to test consume secrets
Feb  4 14:21:47.329: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc" in namespace "projected-1485" to be "success or failure"
Feb  4 14:21:47.334: INFO: Pod "pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.058582ms
Feb  4 14:21:49.343: INFO: Pod "pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013510616s
Feb  4 14:21:51.351: INFO: Pod "pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0215644s
Feb  4 14:21:53.364: INFO: Pod "pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035053767s
Feb  4 14:21:55.371: INFO: Pod "pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041638042s
STEP: Saw pod success
Feb  4 14:21:55.371: INFO: Pod "pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc" satisfied condition "success or failure"
Feb  4 14:21:55.374: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc container projected-secret-volume-test: 
STEP: delete the pod
Feb  4 14:21:55.459: INFO: Waiting for pod pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc to disappear
Feb  4 14:21:55.464: INFO: Pod pod-projected-secrets-948a0b34-a855-40d8-b7fb-8fb8792076dc no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:21:55.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1485" for this suite.
Feb  4 14:22:01.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:22:01.622: INFO: namespace projected-1485 deletion completed in 6.143093423s

• [SLOW TEST:14.411 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:22:01.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:22:01.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9109" for this suite.
Feb  4 14:22:07.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:22:07.966: INFO: namespace kubelet-test-9109 deletion completed in 6.195735567s

• [SLOW TEST:6.344 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:22:07.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:22:13.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-13" for this suite.
Feb  4 14:22:19.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:22:19.940: INFO: namespace watch-13 deletion completed in 6.224944516s

• [SLOW TEST:11.973 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:22:19.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  4 14:22:38.354: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 14:22:38.374: INFO: Pod pod-with-prestop-http-hook still exists
Feb  4 14:22:40.374: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 14:22:40.386: INFO: Pod pod-with-prestop-http-hook still exists
Feb  4 14:22:42.374: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 14:22:42.388: INFO: Pod pod-with-prestop-http-hook still exists
Feb  4 14:22:44.374: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 14:22:44.387: INFO: Pod pod-with-prestop-http-hook still exists
Feb  4 14:22:46.374: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 14:22:46.385: INFO: Pod pod-with-prestop-http-hook still exists
Feb  4 14:22:48.374: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  4 14:22:48.386: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:22:48.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2484" for this suite.
Feb  4 14:23:10.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:23:10.618: INFO: namespace container-lifecycle-hook-2484 deletion completed in 22.193882911s

• [SLOW TEST:50.678 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:23:10.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0204 14:23:16.935306       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  4 14:23:16.935: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:23:16.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2676" for this suite.
Feb  4 14:23:29.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:23:29.095: INFO: namespace gc-2676 deletion completed in 12.155514941s

• [SLOW TEST:18.475 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:23:29.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  4 14:23:29.296: INFO: Waiting up to 5m0s for pod "pod-2f94f728-e681-46c8-83b2-a15c8205ecf2" in namespace "emptydir-4" to be "success or failure"
Feb  4 14:23:29.305: INFO: Pod "pod-2f94f728-e681-46c8-83b2-a15c8205ecf2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.049012ms
Feb  4 14:23:31.319: INFO: Pod "pod-2f94f728-e681-46c8-83b2-a15c8205ecf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022761839s
Feb  4 14:23:33.329: INFO: Pod "pod-2f94f728-e681-46c8-83b2-a15c8205ecf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032465313s
Feb  4 14:23:35.337: INFO: Pod "pod-2f94f728-e681-46c8-83b2-a15c8205ecf2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04108998s
Feb  4 14:23:37.355: INFO: Pod "pod-2f94f728-e681-46c8-83b2-a15c8205ecf2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059062562s
Feb  4 14:23:39.370: INFO: Pod "pod-2f94f728-e681-46c8-83b2-a15c8205ecf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074210671s
STEP: Saw pod success
Feb  4 14:23:39.371: INFO: Pod "pod-2f94f728-e681-46c8-83b2-a15c8205ecf2" satisfied condition "success or failure"
Feb  4 14:23:39.374: INFO: Trying to get logs from node iruya-node pod pod-2f94f728-e681-46c8-83b2-a15c8205ecf2 container test-container: 
STEP: delete the pod
Feb  4 14:23:39.481: INFO: Waiting for pod pod-2f94f728-e681-46c8-83b2-a15c8205ecf2 to disappear
Feb  4 14:23:39.493: INFO: Pod pod-2f94f728-e681-46c8-83b2-a15c8205ecf2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:23:39.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4" for this suite.
Feb  4 14:23:45.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:23:45.780: INFO: namespace emptydir-4 deletion completed in 6.276579882s

• [SLOW TEST:16.685 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:23:45.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6829
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6829
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6829
Feb  4 14:23:45.966: INFO: Found 0 stateful pods, waiting for 1
Feb  4 14:23:55.976: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  4 14:23:55.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 14:23:57.450: INFO: stderr: "I0204 14:23:56.216147    1834 log.go:172] (0xc0001428f0) (0xc00096b040) Create stream\nI0204 14:23:56.216479    1834 log.go:172] (0xc0001428f0) (0xc00096b040) Stream added, broadcasting: 1\nI0204 14:23:56.236802    1834 log.go:172] (0xc0001428f0) Reply frame received for 1\nI0204 14:23:56.237110    1834 log.go:172] (0xc0001428f0) (0xc00096a000) Create stream\nI0204 14:23:56.237165    1834 log.go:172] (0xc0001428f0) (0xc00096a000) Stream added, broadcasting: 3\nI0204 14:23:56.241039    1834 log.go:172] (0xc0001428f0) Reply frame received for 3\nI0204 14:23:56.241085    1834 log.go:172] (0xc0001428f0) (0xc0003cc1e0) Create stream\nI0204 14:23:56.241099    1834 log.go:172] (0xc0001428f0) (0xc0003cc1e0) Stream added, broadcasting: 5\nI0204 14:23:56.243668    1834 log.go:172] (0xc0001428f0) Reply frame received for 5\nI0204 14:23:56.551417    1834 log.go:172] (0xc0001428f0) Data frame received for 5\nI0204 14:23:56.551800    1834 log.go:172] (0xc0003cc1e0) (5) Data frame handling\nI0204 14:23:56.551858    1834 log.go:172] (0xc0003cc1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 14:23:57.251993    1834 log.go:172] (0xc0001428f0) Data frame received for 3\nI0204 14:23:57.252054    1834 log.go:172] (0xc00096a000) (3) Data frame handling\nI0204 14:23:57.252109    1834 log.go:172] (0xc00096a000) (3) Data frame sent\nI0204 14:23:57.438403    1834 log.go:172] (0xc0001428f0) Data frame received for 1\nI0204 14:23:57.438519    1834 log.go:172] (0xc00096b040) (1) Data frame handling\nI0204 14:23:57.438589    1834 log.go:172] (0xc00096b040) (1) Data frame sent\nI0204 14:23:57.438629    1834 log.go:172] (0xc0001428f0) (0xc00096b040) Stream removed, broadcasting: 1\nI0204 14:23:57.439131    1834 log.go:172] (0xc0001428f0) (0xc00096a000) Stream removed, broadcasting: 3\nI0204 14:23:57.439324    1834 log.go:172] (0xc0001428f0) (0xc0003cc1e0) Stream removed, broadcasting: 5\nI0204 14:23:57.439366    1834 log.go:172] (0xc0001428f0) (0xc00096b040) Stream removed, broadcasting: 1\nI0204 14:23:57.439382    1834 log.go:172] (0xc0001428f0) (0xc00096a000) Stream removed, broadcasting: 3\nI0204 14:23:57.439392    1834 log.go:172] (0xc0001428f0) (0xc0003cc1e0) Stream removed, broadcasting: 5\nI0204 14:23:57.439494    1834 log.go:172] (0xc0001428f0) Go away received\n"
Feb  4 14:23:57.450: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 14:23:57.450: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  4 14:23:57.459: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  4 14:24:07.504: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  4 14:24:07.504: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 14:24:07.574: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  4 14:24:07.574: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:45 +0000 UTC  }]
Feb  4 14:24:07.574: INFO: 
Feb  4 14:24:07.574: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  4 14:24:08.782: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980804705s
Feb  4 14:24:09.801: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.773243807s
Feb  4 14:24:11.073: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.75385138s
Feb  4 14:24:12.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.48177862s
Feb  4 14:24:13.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.301039657s
Feb  4 14:24:14.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.285482627s
Feb  4 14:24:15.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.278497269s
Feb  4 14:24:16.404: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.266210997s
Feb  4 14:24:17.416: INFO: Verifying statefulset ss doesn't scale past 3 for another 150.888755ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6829
Feb  4 14:24:18.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:24:19.230: INFO: stderr: "I0204 14:24:18.717067    1854 log.go:172] (0xc000128790) (0xc00064e960) Create stream\nI0204 14:24:18.717315    1854 log.go:172] (0xc000128790) (0xc00064e960) Stream added, broadcasting: 1\nI0204 14:24:18.723849    1854 log.go:172] (0xc000128790) Reply frame received for 1\nI0204 14:24:18.723885    1854 log.go:172] (0xc000128790) (0xc000316000) Create stream\nI0204 14:24:18.723893    1854 log.go:172] (0xc000128790) (0xc000316000) Stream added, broadcasting: 3\nI0204 14:24:18.725694    1854 log.go:172] (0xc000128790) Reply frame received for 3\nI0204 14:24:18.725722    1854 log.go:172] (0xc000128790) (0xc00064ea00) Create stream\nI0204 14:24:18.725732    1854 log.go:172] (0xc000128790) (0xc00064ea00) Stream added, broadcasting: 5\nI0204 14:24:18.729531    1854 log.go:172] (0xc000128790) Reply frame received for 5\nI0204 14:24:19.002146    1854 log.go:172] (0xc000128790) Data frame received for 5\nI0204 14:24:19.002193    1854 log.go:172] (0xc00064ea00) (5) Data frame handling\nI0204 14:24:19.002200    1854 log.go:172] (0xc00064ea00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0204 14:24:19.002244    1854 log.go:172] (0xc000128790) Data frame received for 3\nI0204 14:24:19.002259    1854 log.go:172] (0xc000316000) (3) Data frame handling\nI0204 14:24:19.002272    1854 log.go:172] (0xc000316000) (3) Data frame sent\nI0204 14:24:19.221440    1854 log.go:172] (0xc000128790) Data frame received for 1\nI0204 14:24:19.221523    1854 log.go:172] (0xc000128790) (0xc000316000) Stream removed, broadcasting: 3\nI0204 14:24:19.221562    1854 log.go:172] (0xc00064e960) (1) Data frame handling\nI0204 14:24:19.221584    1854 log.go:172] (0xc00064e960) (1) Data frame sent\nI0204 14:24:19.221591    1854 log.go:172] (0xc000128790) (0xc00064e960) Stream removed, broadcasting: 1\nI0204 14:24:19.222187    1854 log.go:172] (0xc000128790) (0xc00064ea00) Stream removed, broadcasting: 5\nI0204 14:24:19.222275    1854 log.go:172] (0xc000128790) (0xc00064e960) Stream removed, broadcasting: 1\nI0204 14:24:19.222293    1854 log.go:172] (0xc000128790) (0xc000316000) Stream removed, broadcasting: 3\nI0204 14:24:19.222322    1854 log.go:172] (0xc000128790) (0xc00064ea00) Stream removed, broadcasting: 5\n"
Feb  4 14:24:19.231: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  4 14:24:19.231: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  4 14:24:19.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:24:19.397: INFO: rc: 1
Feb  4 14:24:19.398: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc003841380 exit status 1   true [0xc00206ed88 0xc00206eda0 0xc00206edb8] [0xc00206ed88 0xc00206eda0 0xc00206edb8] [0xc00206ed98 0xc00206edb0] [0xba6c50 0xba6c50] 0xc002525d40 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb  4 14:24:29.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:24:30.023: INFO: stderr: "I0204 14:24:29.599277    1885 log.go:172] (0xc00066ca50) (0xc0009165a0) Create stream\nI0204 14:24:29.599406    1885 log.go:172] (0xc00066ca50) (0xc0009165a0) Stream added, broadcasting: 1\nI0204 14:24:29.602962    1885 log.go:172] (0xc00066ca50) Reply frame received for 1\nI0204 14:24:29.603010    1885 log.go:172] (0xc00066ca50) (0xc000912000) Create stream\nI0204 14:24:29.603031    1885 log.go:172] (0xc00066ca50) (0xc000912000) Stream added, broadcasting: 3\nI0204 14:24:29.603982    1885 log.go:172] (0xc00066ca50) Reply frame received for 3\nI0204 14:24:29.604006    1885 log.go:172] (0xc00066ca50) (0xc000916640) Create stream\nI0204 14:24:29.604015    1885 log.go:172] (0xc00066ca50) (0xc000916640) Stream added, broadcasting: 5\nI0204 14:24:29.607870    1885 log.go:172] (0xc00066ca50) Reply frame received for 5\nI0204 14:24:29.864219    1885 log.go:172] (0xc00066ca50) Data frame received for 5\nI0204 14:24:29.864266    1885 log.go:172] (0xc000916640) (5) Data frame handling\nI0204 14:24:29.864286    1885 log.go:172] (0xc000916640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0204 14:24:29.905799    1885 log.go:172] (0xc00066ca50) Data frame received for 3\nI0204 14:24:29.905867    1885 log.go:172] (0xc000912000) (3) Data frame handling\nI0204 14:24:29.905888    1885 log.go:172] (0xc000912000) (3) Data frame sent\nI0204 14:24:29.905972    1885 log.go:172] (0xc00066ca50) Data frame received for 5\nI0204 14:24:29.905987    1885 log.go:172] (0xc000916640) (5) Data frame handling\nI0204 14:24:29.905999    1885 log.go:172] (0xc000916640) (5) Data frame sent\nI0204 14:24:29.906016    1885 log.go:172] (0xc00066ca50) Data frame received for 5\nI0204 14:24:29.906026    1885 log.go:172] (0xc000916640) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0204 14:24:29.906046    1885 log.go:172] (0xc000916640) (5) Data frame sent\nI0204 14:24:30.014631    1885 log.go:172] 
(0xc00066ca50) Data frame received for 1\nI0204 14:24:30.014689    1885 log.go:172] (0xc00066ca50) (0xc000912000) Stream removed, broadcasting: 3\nI0204 14:24:30.014715    1885 log.go:172] (0xc0009165a0) (1) Data frame handling\nI0204 14:24:30.014724    1885 log.go:172] (0xc0009165a0) (1) Data frame sent\nI0204 14:24:30.014732    1885 log.go:172] (0xc00066ca50) (0xc0009165a0) Stream removed, broadcasting: 1\nI0204 14:24:30.015136    1885 log.go:172] (0xc00066ca50) (0xc000916640) Stream removed, broadcasting: 5\nI0204 14:24:30.015203    1885 log.go:172] (0xc00066ca50) (0xc0009165a0) Stream removed, broadcasting: 1\nI0204 14:24:30.015216    1885 log.go:172] (0xc00066ca50) (0xc000912000) Stream removed, broadcasting: 3\nI0204 14:24:30.015222    1885 log.go:172] (0xc00066ca50) (0xc000916640) Stream removed, broadcasting: 5\n"
Feb  4 14:24:30.023: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  4 14:24:30.023: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  4 14:24:30.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:24:30.749: INFO: stderr: "I0204 14:24:30.250697    1905 log.go:172] (0xc0008a2a50) (0xc00089ad20) Create stream\nI0204 14:24:30.250837    1905 log.go:172] (0xc0008a2a50) (0xc00089ad20) Stream added, broadcasting: 1\nI0204 14:24:30.261142    1905 log.go:172] (0xc0008a2a50) Reply frame received for 1\nI0204 14:24:30.261246    1905 log.go:172] (0xc0008a2a50) (0xc0007d8500) Create stream\nI0204 14:24:30.261277    1905 log.go:172] (0xc0008a2a50) (0xc0007d8500) Stream added, broadcasting: 3\nI0204 14:24:30.265883    1905 log.go:172] (0xc0008a2a50) Reply frame received for 3\nI0204 14:24:30.265911    1905 log.go:172] (0xc0008a2a50) (0xc0007d8000) Create stream\nI0204 14:24:30.265921    1905 log.go:172] (0xc0008a2a50) (0xc0007d8000) Stream added, broadcasting: 5\nI0204 14:24:30.269929    1905 log.go:172] (0xc0008a2a50) Reply frame received for 5\nI0204 14:24:30.427984    1905 log.go:172] (0xc0008a2a50) Data frame received for 5\nI0204 14:24:30.428503    1905 log.go:172] (0xc0007d8000) (5) Data frame handling\nI0204 14:24:30.428660    1905 log.go:172] (0xc0007d8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0204 14:24:30.428790    1905 log.go:172] (0xc0008a2a50) Data frame received for 3\nI0204 14:24:30.428952    1905 log.go:172] (0xc0007d8500) (3) Data frame handling\nI0204 14:24:30.428981    1905 log.go:172] (0xc0007d8500) (3) Data frame sent\nI0204 14:24:30.736705    1905 log.go:172] (0xc0008a2a50) (0xc0007d8500) Stream removed, broadcasting: 3\nI0204 14:24:30.736912    1905 log.go:172] (0xc0008a2a50) Data frame received for 1\nI0204 14:24:30.736926    1905 log.go:172] (0xc00089ad20) (1) Data frame handling\nI0204 14:24:30.736954    1905 log.go:172] (0xc00089ad20) (1) Data frame sent\nI0204 14:24:30.736961    1905 log.go:172] (0xc0008a2a50) (0xc00089ad20) Stream removed, broadcasting: 1\nI0204 14:24:30.737517    1905 log.go:172] (0xc0008a2a50) (0xc0007d8000) 
Stream removed, broadcasting: 5\nI0204 14:24:30.737541    1905 log.go:172] (0xc0008a2a50) (0xc00089ad20) Stream removed, broadcasting: 1\nI0204 14:24:30.737619    1905 log.go:172] (0xc0008a2a50) (0xc0007d8500) Stream removed, broadcasting: 3\nI0204 14:24:30.737628    1905 log.go:172] (0xc0008a2a50) (0xc0007d8000) Stream removed, broadcasting: 5\n"
Feb  4 14:24:30.750: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  4 14:24:30.750: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  4 14:24:30.767: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:24:30.767: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Feb  4 14:24:40.783: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:24:40.783: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:24:40.783: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
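Every exec in this log follows one pattern: `kubectl exec -n <ns> <pod> -- /bin/sh -x -c 'mv ... || true'`. The trailing `|| true` keeps the shell's exit code zero even when `mv` fails because the file was already moved on an earlier attempt, making the step safe to retry. A minimal sketch of how such a host-command helper could be assembled (the helper name and the use of `subprocess`-style argv construction are assumptions, not the framework's actual code):

```python
import shlex

def build_host_cmd(kubeconfig, namespace, pod, cmd):
    """Build the kubectl exec argv used to run a shell command inside a pod.

    `|| true` keeps the exit code 0 even if `mv` fails because the file
    was already moved on a previous attempt (making the step idempotent).
    """
    return [
        "kubectl", f"--kubeconfig={kubeconfig}",
        "exec", f"--namespace={namespace}", pod,
        "--", "/bin/sh", "-x", "-c", f"{cmd} || true",
    ]

argv = build_host_cmd("/root/.kube/config", "statefulset-6829", "ss-0",
                      "mv -v /tmp/index.html /usr/share/nginx/html/")
# Reproduce the single-quoted command line as it appears in the log.
print(" ".join(shlex.quote(a) for a in argv))
```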
STEP: Scale down will not halt with unhealthy stateful pod
Feb  4 14:24:40.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 14:24:41.304: INFO: stderr: "I0204 14:24:41.012707    1918 log.go:172] (0xc000118dc0) (0xc000578820) Create stream\nI0204 14:24:41.012785    1918 log.go:172] (0xc000118dc0) (0xc000578820) Stream added, broadcasting: 1\nI0204 14:24:41.019029    1918 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0204 14:24:41.019071    1918 log.go:172] (0xc000118dc0) (0xc00057a000) Create stream\nI0204 14:24:41.019079    1918 log.go:172] (0xc000118dc0) (0xc00057a000) Stream added, broadcasting: 3\nI0204 14:24:41.021359    1918 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0204 14:24:41.021383    1918 log.go:172] (0xc000118dc0) (0xc000704000) Create stream\nI0204 14:24:41.021404    1918 log.go:172] (0xc000118dc0) (0xc000704000) Stream added, broadcasting: 5\nI0204 14:24:41.023153    1918 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0204 14:24:41.159542    1918 log.go:172] (0xc000118dc0) Data frame received for 3\nI0204 14:24:41.159611    1918 log.go:172] (0xc00057a000) (3) Data frame handling\nI0204 14:24:41.159631    1918 log.go:172] (0xc000118dc0) Data frame received for 5\nI0204 14:24:41.159654    1918 log.go:172] (0xc000704000) (5) Data frame handling\nI0204 14:24:41.159662    1918 log.go:172] (0xc00057a000) (3) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 14:24:41.159680    1918 log.go:172] (0xc000704000) (5) Data frame sent\nI0204 14:24:41.294633    1918 log.go:172] (0xc000118dc0) (0xc00057a000) Stream removed, broadcasting: 3\nI0204 14:24:41.294796    1918 log.go:172] (0xc000118dc0) Data frame received for 1\nI0204 14:24:41.294854    1918 log.go:172] (0xc000578820) (1) Data frame handling\nI0204 14:24:41.294887    1918 log.go:172] (0xc000578820) (1) Data frame sent\nI0204 14:24:41.294913    1918 log.go:172] (0xc000118dc0) (0xc000578820) Stream removed, broadcasting: 1\nI0204 14:24:41.294949    1918 log.go:172] (0xc000118dc0) (0xc000704000) Stream removed, broadcasting: 5\nI0204 14:24:41.295023    1918 log.go:172] 
(0xc000118dc0) Go away received\nI0204 14:24:41.295667    1918 log.go:172] (0xc000118dc0) (0xc000578820) Stream removed, broadcasting: 1\nI0204 14:24:41.295690    1918 log.go:172] (0xc000118dc0) (0xc00057a000) Stream removed, broadcasting: 3\nI0204 14:24:41.295697    1918 log.go:172] (0xc000118dc0) (0xc000704000) Stream removed, broadcasting: 5\n"
Feb  4 14:24:41.304: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 14:24:41.304: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  4 14:24:41.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 14:24:41.730: INFO: stderr: "I0204 14:24:41.521819    1937 log.go:172] (0xc000116f20) (0xc000578c80) Create stream\nI0204 14:24:41.521992    1937 log.go:172] (0xc000116f20) (0xc000578c80) Stream added, broadcasting: 1\nI0204 14:24:41.526729    1937 log.go:172] (0xc000116f20) Reply frame received for 1\nI0204 14:24:41.526785    1937 log.go:172] (0xc000116f20) (0xc00091c000) Create stream\nI0204 14:24:41.526796    1937 log.go:172] (0xc000116f20) (0xc00091c000) Stream added, broadcasting: 3\nI0204 14:24:41.527962    1937 log.go:172] (0xc000116f20) Reply frame received for 3\nI0204 14:24:41.528030    1937 log.go:172] (0xc000116f20) (0xc000950000) Create stream\nI0204 14:24:41.528045    1937 log.go:172] (0xc000116f20) (0xc000950000) Stream added, broadcasting: 5\nI0204 14:24:41.529919    1937 log.go:172] (0xc000116f20) Reply frame received for 5\nI0204 14:24:41.607693    1937 log.go:172] (0xc000116f20) Data frame received for 5\nI0204 14:24:41.607729    1937 log.go:172] (0xc000950000) (5) Data frame handling\nI0204 14:24:41.607750    1937 log.go:172] (0xc000950000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 14:24:41.633000    1937 log.go:172] (0xc000116f20) Data frame received for 3\nI0204 14:24:41.633026    1937 log.go:172] (0xc00091c000) (3) Data frame handling\nI0204 14:24:41.633052    1937 log.go:172] (0xc00091c000) (3) Data frame sent\nI0204 14:24:41.724899    1937 log.go:172] (0xc000116f20) Data frame received for 1\nI0204 14:24:41.725118    1937 log.go:172] (0xc000116f20) (0xc000950000) Stream removed, broadcasting: 5\nI0204 14:24:41.725183    1937 log.go:172] (0xc000578c80) (1) Data frame handling\nI0204 14:24:41.725208    1937 log.go:172] (0xc000578c80) (1) Data frame sent\nI0204 14:24:41.725260    1937 log.go:172] (0xc000116f20) (0xc00091c000) Stream removed, broadcasting: 3\nI0204 14:24:41.725300    1937 log.go:172] (0xc000116f20) (0xc000578c80) Stream removed, broadcasting: 1\nI0204 14:24:41.725335    1937 log.go:172] 
(0xc000116f20) Go away received\nI0204 14:24:41.726053    1937 log.go:172] (0xc000116f20) (0xc000578c80) Stream removed, broadcasting: 1\nI0204 14:24:41.726075    1937 log.go:172] (0xc000116f20) (0xc00091c000) Stream removed, broadcasting: 3\nI0204 14:24:41.726085    1937 log.go:172] (0xc000116f20) (0xc000950000) Stream removed, broadcasting: 5\n"
Feb  4 14:24:41.731: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 14:24:41.731: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  4 14:24:41.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 14:24:42.332: INFO: stderr: "I0204 14:24:42.002050    1956 log.go:172] (0xc0009c6370) (0xc0008dc780) Create stream\nI0204 14:24:42.002141    1956 log.go:172] (0xc0009c6370) (0xc0008dc780) Stream added, broadcasting: 1\nI0204 14:24:42.012532    1956 log.go:172] (0xc0009c6370) Reply frame received for 1\nI0204 14:24:42.012622    1956 log.go:172] (0xc0009c6370) (0xc000922000) Create stream\nI0204 14:24:42.012634    1956 log.go:172] (0xc0009c6370) (0xc000922000) Stream added, broadcasting: 3\nI0204 14:24:42.015343    1956 log.go:172] (0xc0009c6370) Reply frame received for 3\nI0204 14:24:42.015362    1956 log.go:172] (0xc0009c6370) (0xc0009220a0) Create stream\nI0204 14:24:42.015368    1956 log.go:172] (0xc0009c6370) (0xc0009220a0) Stream added, broadcasting: 5\nI0204 14:24:42.016714    1956 log.go:172] (0xc0009c6370) Reply frame received for 5\nI0204 14:24:42.168928    1956 log.go:172] (0xc0009c6370) Data frame received for 5\nI0204 14:24:42.169029    1956 log.go:172] (0xc0009220a0) (5) Data frame handling\nI0204 14:24:42.169067    1956 log.go:172] (0xc0009220a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 14:24:42.196665    1956 log.go:172] (0xc0009c6370) Data frame received for 3\nI0204 14:24:42.196710    1956 log.go:172] (0xc000922000) (3) Data frame handling\nI0204 14:24:42.196723    1956 log.go:172] (0xc000922000) (3) Data frame sent\nI0204 14:24:42.326438    1956 log.go:172] (0xc0009c6370) (0xc000922000) Stream removed, broadcasting: 3\nI0204 14:24:42.326505    1956 log.go:172] (0xc0009c6370) Data frame received for 1\nI0204 14:24:42.326527    1956 log.go:172] (0xc0008dc780) (1) Data frame handling\nI0204 14:24:42.326590    1956 log.go:172] (0xc0008dc780) (1) Data frame sent\nI0204 14:24:42.326651    1956 log.go:172] (0xc0009c6370) (0xc0009220a0) Stream removed, broadcasting: 5\nI0204 14:24:42.326742    1956 log.go:172] (0xc0009c6370) (0xc0008dc780) Stream removed, broadcasting: 1\nI0204 14:24:42.326765    1956 log.go:172] 
(0xc0009c6370) Go away received\nI0204 14:24:42.327128    1956 log.go:172] (0xc0009c6370) (0xc0008dc780) Stream removed, broadcasting: 1\nI0204 14:24:42.327147    1956 log.go:172] (0xc0009c6370) (0xc000922000) Stream removed, broadcasting: 3\nI0204 14:24:42.327153    1956 log.go:172] (0xc0009c6370) (0xc0009220a0) Stream removed, broadcasting: 5\n"
Feb  4 14:24:42.333: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 14:24:42.333: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  4 14:24:42.333: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 14:24:42.340: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  4 14:24:52.351: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  4 14:24:52.351: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  4 14:24:52.351: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  4 14:24:52.370: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  4 14:24:52.370: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:45 +0000 UTC  }]
Feb  4 14:24:52.370: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  }]
Feb  4 14:24:52.370: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  }]
Feb  4 14:24:52.370: INFO: 
Feb  4 14:24:52.370: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  4 14:25:00.128: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  4 14:25:00.128: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:45 +0000 UTC  }]
Feb  4 14:25:00.128: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  }]
Feb  4 14:25:00.128: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  }]
Feb  4 14:25:00.128: INFO: 
Feb  4 14:25:00.128: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  4 14:25:01.142: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  4 14:25:01.142: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:45 +0000 UTC  }]
Feb  4 14:25:01.142: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  }]
Feb  4 14:25:01.142: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  }]
Feb  4 14:25:01.143: INFO: 
Feb  4 14:25:01.143: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  4 14:25:02.254: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  4 14:25:02.254: INFO: ss-0  iruya-node                 Pending  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:23:45 +0000 UTC  }]
Feb  4 14:25:02.254: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  }]
Feb  4 14:25:02.254: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 14:24:07 +0000 UTC  }]
Feb  4 14:25:02.254: INFO: 
Feb  4 14:25:02.255: INFO: StatefulSet ss has not reached scale 0, at 3
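The repeated "has not reached scale 0" lines above come from a poll loop: the harness re-checks pod status on a fixed cadence until the condition holds or a timeout expires. A generic sketch of that wait pattern, matching the log's 10s interval (this helper is illustrative, not the e2e framework's real WaitFor code):

```python
import time

def wait_for(predicate, timeout=600.0, interval=10.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `predicate` until it returns True or `timeout` seconds elapse.

    Returns True on success, False on timeout. `clock` and `sleep` are
    injectable so the loop can be exercised without real waiting.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if predicate():
            return True
        sleep(interval)
    return False

# Example with a no-op sleep so the sketch runs instantly:
# the condition becomes true on the third poll.
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

ok = wait_for(ready, timeout=60, interval=10, sleep=lambda s: None)
```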
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6829
Feb  4 14:25:03.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:25:03.426: INFO: rc: 1
Feb  4 14:25:03.426: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002e585d0 exit status 1   true [0xc003310028 0xc003310040 0xc003310058] [0xc003310028 0xc003310040 0xc003310058] [0xc003310038 0xc003310050] [0xba6c50 0xba6c50] 0xc0021c81e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb  4 14:25:13.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:25:13.533: INFO: rc: 1
Feb  4 14:25:13.533: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002a36ea0 exit status 1   true [0xc00070dd40 0xc00070de20 0xc00070dea0] [0xc00070dd40 0xc00070de20 0xc00070dea0] [0xc00070dde0 0xc00070de68] [0xba6c50 0xba6c50] 0xc00221ee40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  4 14:25:23.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:25:23.758: INFO: rc: 1
Feb  4 14:25:23.759: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002e58690 exit status 1   true [0xc003310060 0xc003310078 0xc003310098] [0xc003310060 0xc003310078 0xc003310098] [0xc003310070 0xc003310088] [0xba6c50 0xba6c50] 0xc0021c94a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
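The retries above hit two distinct failures: `container not found ("nginx")` (the pod exists but its container is down, so a retry may succeed) and `pods "ss-1" not found` (the pod was deleted, the expected end state while scaling to 0). A hedged sketch of telling them apart; these substring checks are an illustration based on the messages in this log, not the framework's actual error handling:

```python
def classify_exec_error(stderr: str) -> str:
    """Classify kubectl exec failures seen during StatefulSet scale-down.

    'transient': container not running yet/anymore, worth retrying.
    'gone': the pod object itself was deleted by the scale-down.
    """
    if "container not found" in stderr:
        return "transient"
    if "pods" in stderr and "not found" in stderr:
        return "gone"
    return "unknown"

transient = classify_exec_error(
    'error: unable to upgrade connection: container not found ("nginx")')
gone = classify_exec_error(
    'Error from server (NotFound): pods "ss-1" not found')
```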
Feb  4 14:25:33.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:25:33.957: INFO: rc: 1
Feb  4 14:25:33.957: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002c5c2a0 exit status 1   true [0xc0000f3e60 0xc0000f3ef0 0xc0000f3fe0] [0xc0000f3e60 0xc0000f3ef0 0xc0000f3fe0] [0xc0000f3ed8 0xc0000f3f78] [0xba6c50 0xba6c50] 0xc0019042a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  4 14:25:43.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:25:44.075: INFO: rc: 1
Feb  4 14:25:44.075: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002c5c360 exit status 1   true [0xc001ff0000 0xc001ff0018 0xc001ff0030] [0xc001ff0000 0xc001ff0018 0xc001ff0030] [0xc001ff0010 0xc001ff0028] [0xba6c50 0xba6c50] 0xc001904660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  4 14:25:54.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:25:54.192: INFO: rc: 1
Feb  4 14:25:54.192: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002e58750 exit status 1   true [0xc0033100b8 0xc0033100f0 0xc003310130] [0xc0033100b8 0xc0033100f0 0xc003310130] [0xc0033100d8 0xc003310128] [0xba6c50 0xba6c50] 0xc002c64300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  4 14:26:04.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:26:04.324: INFO: rc: 1
Feb  4 14:26:04.324: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002e58810 exit status 1   true [0xc003310138 0xc003310178 0xc0033101b8] [0xc003310138 0xc003310178 0xc0033101b8] [0xc003310158 0xc0033101a0] [0xba6c50 0xba6c50] 0xc002c646c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  4 14:26:14.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:26:14.717: INFO: rc: 1
Feb  4 14:26:14.718: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002fea690 exit status 1   true [0xc0006b9850 0xc0006b98c8 0xc0006b9a08] [0xc0006b9850 0xc0006b98c8 0xc0006b9a08] [0xc0006b98b8 0xc0006b99b8] [0xba6c50 0xba6c50] 0xc002368a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  4 14:30:10.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6829 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:30:10.372: INFO: rc: 1
Feb  4 14:30:10.373: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Feb  4 14:30:10.373: INFO: Scaling statefulset ss to 0
Feb  4 14:30:10.389: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  4 14:30:10.392: INFO: Deleting all statefulset in ns statefulset-6829
Feb  4 14:30:10.394: INFO: Scaling statefulset ss to 0
Feb  4 14:30:10.405: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 14:30:10.407: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:30:10.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6829" for this suite.
Feb  4 14:30:16.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:30:16.672: INFO: namespace statefulset-6829 deletion completed in 6.171904832s

• [SLOW TEST:390.892 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:30:16.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb  4 14:30:16.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2251 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  4 14:30:24.817: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0204 14:30:23.525664    2554 log.go:172] (0xc000a02630) (0xc00099efa0) Create stream\nI0204 14:30:23.525951    2554 log.go:172] (0xc000a02630) (0xc00099efa0) Stream added, broadcasting: 1\nI0204 14:30:23.552267    2554 log.go:172] (0xc000a02630) Reply frame received for 1\nI0204 14:30:23.552424    2554 log.go:172] (0xc000a02630) (0xc00099e000) Create stream\nI0204 14:30:23.552463    2554 log.go:172] (0xc000a02630) (0xc00099e000) Stream added, broadcasting: 3\nI0204 14:30:23.555289    2554 log.go:172] (0xc000a02630) Reply frame received for 3\nI0204 14:30:23.555391    2554 log.go:172] (0xc000a02630) (0xc000020000) Create stream\nI0204 14:30:23.555414    2554 log.go:172] (0xc000a02630) (0xc000020000) Stream added, broadcasting: 5\nI0204 14:30:23.556838    2554 log.go:172] (0xc000a02630) Reply frame received for 5\nI0204 14:30:23.556928    2554 log.go:172] (0xc000a02630) (0xc0000ec000) Create stream\nI0204 14:30:23.556953    2554 log.go:172] (0xc000a02630) (0xc0000ec000) Stream added, broadcasting: 7\nI0204 14:30:23.559261    2554 log.go:172] (0xc000a02630) Reply frame received for 7\nI0204 14:30:23.559855    2554 log.go:172] (0xc00099e000) (3) Writing data frame\nI0204 14:30:23.560207    2554 log.go:172] (0xc00099e000) (3) Writing data frame\nI0204 14:30:23.575249    2554 log.go:172] (0xc000a02630) Data frame received for 5\nI0204 14:30:23.575310    2554 log.go:172] (0xc000020000) (5) Data frame handling\nI0204 14:30:23.575330    2554 log.go:172] (0xc000020000) (5) Data frame sent\nI0204 14:30:23.578110    2554 log.go:172] (0xc000a02630) Data frame received for 5\nI0204 14:30:23.578132    2554 log.go:172] (0xc000020000) (5) Data frame handling\nI0204 14:30:23.578148    2554 log.go:172] (0xc000020000) (5) Data frame 
sent\nI0204 14:30:24.754302    2554 log.go:172] (0xc000a02630) Data frame received for 1\nI0204 14:30:24.754750    2554 log.go:172] (0xc00099efa0) (1) Data frame handling\nI0204 14:30:24.754829    2554 log.go:172] (0xc00099efa0) (1) Data frame sent\nI0204 14:30:24.754948    2554 log.go:172] (0xc000a02630) (0xc00099efa0) Stream removed, broadcasting: 1\nI0204 14:30:24.758001    2554 log.go:172] (0xc000a02630) (0xc0000ec000) Stream removed, broadcasting: 7\nI0204 14:30:24.758253    2554 log.go:172] (0xc000a02630) (0xc000020000) Stream removed, broadcasting: 5\nI0204 14:30:24.758402    2554 log.go:172] (0xc000a02630) (0xc00099e000) Stream removed, broadcasting: 3\nI0204 14:30:24.758534    2554 log.go:172] (0xc000a02630) Go away received\nI0204 14:30:24.758832    2554 log.go:172] (0xc000a02630) (0xc00099efa0) Stream removed, broadcasting: 1\nI0204 14:30:24.758974    2554 log.go:172] (0xc000a02630) (0xc00099e000) Stream removed, broadcasting: 3\nI0204 14:30:24.759094    2554 log.go:172] (0xc000a02630) (0xc000020000) Stream removed, broadcasting: 5\nI0204 14:30:24.759130    2554 log.go:172] (0xc000a02630) (0xc0000ec000) Stream removed, broadcasting: 7\n"
Feb  4 14:30:24.817: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:30:26.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2251" for this suite.
Feb  4 14:30:32.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:30:33.019: INFO: namespace kubectl-2251 deletion completed in 6.154418792s

• [SLOW TEST:16.347 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:30:33.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb  4 14:30:33.357: INFO: Waiting up to 5m0s for pod "client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0" in namespace "containers-658" to be "success or failure"
Feb  4 14:30:33.366: INFO: Pod "client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548757ms
Feb  4 14:30:35.379: INFO: Pod "client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021734265s
Feb  4 14:30:37.387: INFO: Pod "client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029332656s
Feb  4 14:30:39.422: INFO: Pod "client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064759088s
Feb  4 14:30:41.436: INFO: Pod "client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078807084s
STEP: Saw pod success
Feb  4 14:30:41.436: INFO: Pod "client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0" satisfied condition "success or failure"
Feb  4 14:30:41.443: INFO: Trying to get logs from node iruya-node pod client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0 container test-container: 
STEP: delete the pod
Feb  4 14:30:41.511: INFO: Waiting for pod client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0 to disappear
Feb  4 14:30:41.521: INFO: Pod client-containers-3b154376-2e66-42d6-87eb-e86eb5b661a0 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:30:41.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-658" for this suite.
Feb  4 14:30:47.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:30:47.760: INFO: namespace containers-658 deletion completed in 6.231393127s

• [SLOW TEST:14.740 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:30:47.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  4 14:30:47.860: INFO: Waiting up to 5m0s for pod "pod-851d9f10-2ceb-4c66-b02b-302409e81803" in namespace "emptydir-8704" to be "success or failure"
Feb  4 14:30:47.922: INFO: Pod "pod-851d9f10-2ceb-4c66-b02b-302409e81803": Phase="Pending", Reason="", readiness=false. Elapsed: 61.42873ms
Feb  4 14:30:49.931: INFO: Pod "pod-851d9f10-2ceb-4c66-b02b-302409e81803": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070416125s
Feb  4 14:30:51.943: INFO: Pod "pod-851d9f10-2ceb-4c66-b02b-302409e81803": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082608054s
Feb  4 14:30:53.963: INFO: Pod "pod-851d9f10-2ceb-4c66-b02b-302409e81803": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102558854s
Feb  4 14:30:55.975: INFO: Pod "pod-851d9f10-2ceb-4c66-b02b-302409e81803": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115383203s
STEP: Saw pod success
Feb  4 14:30:55.976: INFO: Pod "pod-851d9f10-2ceb-4c66-b02b-302409e81803" satisfied condition "success or failure"
Feb  4 14:30:55.981: INFO: Trying to get logs from node iruya-node pod pod-851d9f10-2ceb-4c66-b02b-302409e81803 container test-container: 
STEP: delete the pod
Feb  4 14:30:56.067: INFO: Waiting for pod pod-851d9f10-2ceb-4c66-b02b-302409e81803 to disappear
Feb  4 14:30:56.085: INFO: Pod pod-851d9f10-2ceb-4c66-b02b-302409e81803 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:30:56.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8704" for this suite.
Feb  4 14:31:02.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:31:02.377: INFO: namespace emptydir-8704 deletion completed in 6.287527349s

• [SLOW TEST:14.616 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:31:02.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  4 14:31:02.527: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:31:16.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5306" for this suite.
Feb  4 14:31:22.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:31:22.824: INFO: namespace init-container-5306 deletion completed in 6.171483996s

• [SLOW TEST:20.448 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:31:22.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 14:31:22.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2" in namespace "projected-7940" to be "success or failure"
Feb  4 14:31:22.912: INFO: Pod "downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.653259ms
Feb  4 14:31:24.920: INFO: Pod "downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015400276s
Feb  4 14:31:26.928: INFO: Pod "downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023075635s
Feb  4 14:31:28.936: INFO: Pod "downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031569486s
Feb  4 14:31:30.966: INFO: Pod "downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061445594s
STEP: Saw pod success
Feb  4 14:31:30.966: INFO: Pod "downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2" satisfied condition "success or failure"
Feb  4 14:31:30.971: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2 container client-container: 
STEP: delete the pod
Feb  4 14:31:31.015: INFO: Waiting for pod downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2 to disappear
Feb  4 14:31:31.036: INFO: Pod downwardapi-volume-1b29c356-2ddf-4047-a22d-3f0f5b7c54b2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:31:31.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7940" for this suite.
Feb  4 14:31:37.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:31:37.175: INFO: namespace projected-7940 deletion completed in 6.132541519s

• [SLOW TEST:14.350 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:31:37.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 14:31:37.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8" in namespace "downward-api-5041" to be "success or failure"
Feb  4 14:31:37.347: INFO: Pod "downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.315932ms
Feb  4 14:31:39.365: INFO: Pod "downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052526743s
Feb  4 14:31:41.382: INFO: Pod "downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06974487s
Feb  4 14:31:43.398: INFO: Pod "downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085009349s
Feb  4 14:31:45.416: INFO: Pod "downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103523462s
Feb  4 14:31:47.432: INFO: Pod "downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11911485s
STEP: Saw pod success
Feb  4 14:31:47.432: INFO: Pod "downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8" satisfied condition "success or failure"
Feb  4 14:31:47.439: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8 container client-container: 
STEP: delete the pod
Feb  4 14:31:47.502: INFO: Waiting for pod downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8 to disappear
Feb  4 14:31:47.508: INFO: Pod downwardapi-volume-3d020bae-f034-47ce-8310-36a0e9f772a8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:31:47.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5041" for this suite.
Feb  4 14:31:53.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:31:53.690: INFO: namespace downward-api-5041 deletion completed in 6.176487157s

• [SLOW TEST:16.514 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:31:53.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:32:04.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6857" for this suite.
Feb  4 14:32:42.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:32:43.002: INFO: namespace replication-controller-6857 deletion completed in 38.149300998s

• [SLOW TEST:49.312 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:32:43.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 14:32:43.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:32:51.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1798" for this suite.
Feb  4 14:33:43.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:33:43.356: INFO: namespace pods-1798 deletion completed in 52.146866371s

• [SLOW TEST:60.353 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:33:43.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-5657187d-3a9b-49bb-ac6b-40b90d019448
STEP: Creating a pod to test consume configMaps
Feb  4 14:33:43.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4" in namespace "configmap-8049" to be "success or failure"
Feb  4 14:33:43.531: INFO: Pod "pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.794306ms
Feb  4 14:33:45.541: INFO: Pod "pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025589604s
Feb  4 14:33:47.552: INFO: Pod "pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037383474s
Feb  4 14:33:49.572: INFO: Pod "pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056581231s
Feb  4 14:33:51.585: INFO: Pod "pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070184728s
Feb  4 14:33:53.597: INFO: Pod "pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082512188s
STEP: Saw pod success
Feb  4 14:33:53.598: INFO: Pod "pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4" satisfied condition "success or failure"
Feb  4 14:33:53.602: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4 container configmap-volume-test: 
STEP: delete the pod
Feb  4 14:33:53.767: INFO: Waiting for pod pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4 to disappear
Feb  4 14:33:53.777: INFO: Pod pod-configmaps-e53f9295-bd37-42fc-9960-f950c78591a4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:33:53.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8049" for this suite.
Feb  4 14:33:59.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:33:59.964: INFO: namespace configmap-8049 deletion completed in 6.164946238s

• [SLOW TEST:16.607 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:33:59.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  4 14:34:00.132: INFO: Waiting up to 5m0s for pod "pod-8a200391-9fbe-4129-a5c9-a09df7c6f515" in namespace "emptydir-2652" to be "success or failure"
Feb  4 14:34:00.143: INFO: Pod "pod-8a200391-9fbe-4129-a5c9-a09df7c6f515": Phase="Pending", Reason="", readiness=false. Elapsed: 10.757099ms
Feb  4 14:34:02.191: INFO: Pod "pod-8a200391-9fbe-4129-a5c9-a09df7c6f515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058523937s
Feb  4 14:34:04.234: INFO: Pod "pod-8a200391-9fbe-4129-a5c9-a09df7c6f515": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101131433s
Feb  4 14:34:06.293: INFO: Pod "pod-8a200391-9fbe-4129-a5c9-a09df7c6f515": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160125045s
Feb  4 14:34:08.301: INFO: Pod "pod-8a200391-9fbe-4129-a5c9-a09df7c6f515": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16883883s
Feb  4 14:34:10.308: INFO: Pod "pod-8a200391-9fbe-4129-a5c9-a09df7c6f515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.175359885s
STEP: Saw pod success
Feb  4 14:34:10.308: INFO: Pod "pod-8a200391-9fbe-4129-a5c9-a09df7c6f515" satisfied condition "success or failure"
Feb  4 14:34:10.313: INFO: Trying to get logs from node iruya-node pod pod-8a200391-9fbe-4129-a5c9-a09df7c6f515 container test-container: 
STEP: delete the pod
Feb  4 14:34:10.405: INFO: Waiting for pod pod-8a200391-9fbe-4129-a5c9-a09df7c6f515 to disappear
Feb  4 14:34:10.430: INFO: Pod pod-8a200391-9fbe-4129-a5c9-a09df7c6f515 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:34:10.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2652" for this suite.
Feb  4 14:34:16.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:34:16.649: INFO: namespace emptydir-2652 deletion completed in 6.208675125s

• [SLOW TEST:16.685 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:34:16.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  4 14:34:16.690: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:34:33.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8026" for this suite.
Feb  4 14:34:55.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:34:56.065: INFO: namespace init-container-8026 deletion completed in 22.183453167s

• [SLOW TEST:39.416 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:34:56.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  4 14:35:04.757: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-325 pod-service-account-c66f6888-b04b-4c2e-bd76-56ff53da1df2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  4 14:35:05.374: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-325 pod-service-account-c66f6888-b04b-4c2e-bd76-56ff53da1df2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  4 14:35:05.809: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-325 pod-service-account-c66f6888-b04b-4c2e-bd76-56ff53da1df2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:35:06.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-325" for this suite.
Feb  4 14:35:12.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:35:12.313: INFO: namespace svcaccounts-325 deletion completed in 6.119427127s

• [SLOW TEST:16.247 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:35:12.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-szg7
STEP: Creating a pod to test atomic-volume-subpath
Feb  4 14:35:12.399: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-szg7" in namespace "subpath-1703" to be "success or failure"
Feb  4 14:35:12.405: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.558568ms
Feb  4 14:35:14.413: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014024107s
Feb  4 14:35:16.421: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022335389s
Feb  4 14:35:18.437: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038292443s
Feb  4 14:35:20.447: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048263751s
Feb  4 14:35:22.460: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 10.060857828s
Feb  4 14:35:24.481: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 12.081701744s
Feb  4 14:35:26.501: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 14.10189302s
Feb  4 14:35:28.516: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 16.11675183s
Feb  4 14:35:30.537: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 18.138050727s
Feb  4 14:35:32.552: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 20.153217955s
Feb  4 14:35:34.573: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 22.173852482s
Feb  4 14:35:36.585: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 24.185763528s
Feb  4 14:35:38.595: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 26.195522792s
Feb  4 14:35:40.606: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 28.206928873s
Feb  4 14:35:42.620: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Running", Reason="", readiness=true. Elapsed: 30.221403051s
Feb  4 14:35:44.629: INFO: Pod "pod-subpath-test-configmap-szg7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.229660015s
STEP: Saw pod success
Feb  4 14:35:44.629: INFO: Pod "pod-subpath-test-configmap-szg7" satisfied condition "success or failure"
Feb  4 14:35:44.632: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-szg7 container test-container-subpath-configmap-szg7: 
STEP: delete the pod
Feb  4 14:35:44.711: INFO: Waiting for pod pod-subpath-test-configmap-szg7 to disappear
Feb  4 14:35:44.716: INFO: Pod pod-subpath-test-configmap-szg7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-szg7
Feb  4 14:35:44.716: INFO: Deleting pod "pod-subpath-test-configmap-szg7" in namespace "subpath-1703"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:35:44.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1703" for this suite.
Feb  4 14:35:50.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:35:50.896: INFO: namespace subpath-1703 deletion completed in 6.174970252s

• [SLOW TEST:38.583 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:35:50.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb  4 14:35:50.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9282'
Feb  4 14:35:51.582: INFO: stderr: ""
Feb  4 14:35:51.582: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 14:35:51.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9282'
Feb  4 14:35:51.796: INFO: stderr: ""
Feb  4 14:35:51.796: INFO: stdout: "update-demo-nautilus-t8tft update-demo-nautilus-zpw8v "
Feb  4 14:35:51.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8tft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:35:51.914: INFO: stderr: ""
Feb  4 14:35:51.914: INFO: stdout: ""
Feb  4 14:35:51.914: INFO: update-demo-nautilus-t8tft is created but not running
Feb  4 14:35:56.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9282'
Feb  4 14:35:57.437: INFO: stderr: ""
Feb  4 14:35:57.437: INFO: stdout: "update-demo-nautilus-t8tft update-demo-nautilus-zpw8v "
Feb  4 14:35:57.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8tft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:35:57.708: INFO: stderr: ""
Feb  4 14:35:57.708: INFO: stdout: ""
Feb  4 14:35:57.708: INFO: update-demo-nautilus-t8tft is created but not running
Feb  4 14:36:02.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9282'
Feb  4 14:36:02.803: INFO: stderr: ""
Feb  4 14:36:02.803: INFO: stdout: "update-demo-nautilus-t8tft update-demo-nautilus-zpw8v "
Feb  4 14:36:02.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8tft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:36:02.967: INFO: stderr: ""
Feb  4 14:36:02.967: INFO: stdout: "true"
Feb  4 14:36:02.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8tft -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:36:03.112: INFO: stderr: ""
Feb  4 14:36:03.112: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 14:36:03.112: INFO: validating pod update-demo-nautilus-t8tft
Feb  4 14:36:03.124: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 14:36:03.124: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 14:36:03.124: INFO: update-demo-nautilus-t8tft is verified up and running
Feb  4 14:36:03.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zpw8v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:36:03.196: INFO: stderr: ""
Feb  4 14:36:03.196: INFO: stdout: "true"
Feb  4 14:36:03.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zpw8v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:36:03.297: INFO: stderr: ""
Feb  4 14:36:03.297: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 14:36:03.297: INFO: validating pod update-demo-nautilus-zpw8v
Feb  4 14:36:03.309: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 14:36:03.309: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 14:36:03.309: INFO: update-demo-nautilus-zpw8v is verified up and running
STEP: rolling-update to new replication controller
Feb  4 14:36:03.313: INFO: scanned /root for discovery docs: 
Feb  4 14:36:03.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9282'
Feb  4 14:36:35.380: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  4 14:36:35.381: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 14:36:35.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9282'
Feb  4 14:36:35.703: INFO: stderr: ""
Feb  4 14:36:35.703: INFO: stdout: "update-demo-kitten-2f45x update-demo-kitten-j9t8n update-demo-nautilus-zpw8v "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb  4 14:36:40.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9282'
Feb  4 14:36:40.837: INFO: stderr: ""
Feb  4 14:36:40.838: INFO: stdout: "update-demo-kitten-2f45x update-demo-kitten-j9t8n "
Feb  4 14:36:40.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2f45x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:36:40.940: INFO: stderr: ""
Feb  4 14:36:40.940: INFO: stdout: "true"
Feb  4 14:36:40.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2f45x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:36:41.034: INFO: stderr: ""
Feb  4 14:36:41.034: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  4 14:36:41.034: INFO: validating pod update-demo-kitten-2f45x
Feb  4 14:36:41.065: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  4 14:36:41.065: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  4 14:36:41.065: INFO: update-demo-kitten-2f45x is verified up and running
Feb  4 14:36:41.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j9t8n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:36:41.205: INFO: stderr: ""
Feb  4 14:36:41.205: INFO: stdout: "true"
Feb  4 14:36:41.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-j9t8n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9282'
Feb  4 14:36:41.296: INFO: stderr: ""
Feb  4 14:36:41.296: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  4 14:36:41.296: INFO: validating pod update-demo-kitten-j9t8n
Feb  4 14:36:41.305: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  4 14:36:41.305: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  4 14:36:41.305: INFO: update-demo-kitten-j9t8n is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:36:41.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9282" for this suite.
Feb  4 14:37:03.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:37:03.421: INFO: namespace kubectl-9282 deletion completed in 22.110870388s

• [SLOW TEST:72.525 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:37:03.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  4 14:37:12.695: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:37:12.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3743" for this suite.
Feb  4 14:37:18.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:37:18.924: INFO: namespace container-runtime-3743 deletion completed in 6.184367238s

• [SLOW TEST:15.502 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:37:18.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-978b489f-5c8f-436b-9b19-cee0da49b032
STEP: Creating a pod to test consume secrets
Feb  4 14:37:19.107: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82" in namespace "projected-8318" to be "success or failure"
Feb  4 14:37:19.117: INFO: Pod "pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82": Phase="Pending", Reason="", readiness=false. Elapsed: 10.525973ms
Feb  4 14:37:21.918: INFO: Pod "pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.811185071s
Feb  4 14:37:23.937: INFO: Pod "pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.829746442s
Feb  4 14:37:25.945: INFO: Pod "pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.838565693s
Feb  4 14:37:27.957: INFO: Pod "pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82": Phase="Running", Reason="", readiness=true. Elapsed: 8.85022002s
Feb  4 14:37:29.968: INFO: Pod "pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.861142345s
STEP: Saw pod success
Feb  4 14:37:29.968: INFO: Pod "pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82" satisfied condition "success or failure"
Feb  4 14:37:29.976: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82 container secret-volume-test: 
STEP: delete the pod
Feb  4 14:37:30.041: INFO: Waiting for pod pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82 to disappear
Feb  4 14:37:30.050: INFO: Pod pod-projected-secrets-37d87e09-85f3-4ef9-8f3f-f682dd731d82 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:37:30.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8318" for this suite.
Feb  4 14:37:36.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:37:36.273: INFO: namespace projected-8318 deletion completed in 6.216417608s

• [SLOW TEST:17.349 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:37:36.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb  4 14:37:36.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9625'
Feb  4 14:37:36.760: INFO: stderr: ""
Feb  4 14:37:36.760: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb  4 14:37:37.769: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:37:37.769: INFO: Found 0 / 1
Feb  4 14:37:38.778: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:37:38.779: INFO: Found 0 / 1
Feb  4 14:37:39.773: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:37:39.773: INFO: Found 0 / 1
Feb  4 14:37:40.773: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:37:40.773: INFO: Found 0 / 1
Feb  4 14:37:41.770: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:37:41.770: INFO: Found 0 / 1
Feb  4 14:37:42.769: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:37:42.770: INFO: Found 0 / 1
Feb  4 14:37:43.774: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:37:43.775: INFO: Found 0 / 1
Feb  4 14:37:44.773: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:37:44.773: INFO: Found 1 / 1
Feb  4 14:37:44.773: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  4 14:37:44.779: INFO: Selector matched 1 pods for map[app:redis]
Feb  4 14:37:44.779: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  4 14:37:44.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nzftd redis-master --namespace=kubectl-9625'
Feb  4 14:37:44.994: INFO: stderr: ""
Feb  4 14:37:44.994: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 04 Feb 14:37:43.736 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Feb 14:37:43.736 # Server started, Redis version 3.2.12\n1:M 04 Feb 14:37:43.736 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Feb 14:37:43.736 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  4 14:37:44.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nzftd redis-master --namespace=kubectl-9625 --tail=1'
Feb  4 14:37:45.138: INFO: stderr: ""
Feb  4 14:37:45.139: INFO: stdout: "1:M 04 Feb 14:37:43.736 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  4 14:37:45.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nzftd redis-master --namespace=kubectl-9625 --limit-bytes=1'
Feb  4 14:37:45.360: INFO: stderr: ""
Feb  4 14:37:45.360: INFO: stdout: " "
STEP: exposing timestamps
Feb  4 14:37:45.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nzftd redis-master --namespace=kubectl-9625 --tail=1 --timestamps'
Feb  4 14:37:45.474: INFO: stderr: ""
Feb  4 14:37:45.474: INFO: stdout: "2020-02-04T14:37:43.737360314Z 1:M 04 Feb 14:37:43.736 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  4 14:37:47.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nzftd redis-master --namespace=kubectl-9625 --since=1s'
Feb  4 14:37:48.168: INFO: stderr: ""
Feb  4 14:37:48.168: INFO: stdout: ""
Feb  4 14:37:48.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nzftd redis-master --namespace=kubectl-9625 --since=24h'
Feb  4 14:37:48.357: INFO: stderr: ""
Feb  4 14:37:48.358: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 04 Feb 14:37:43.736 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Feb 14:37:43.736 # Server started, Redis version 3.2.12\n1:M 04 Feb 14:37:43.736 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Feb 14:37:43.736 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb  4 14:37:48.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9625'
Feb  4 14:37:48.455: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 14:37:48.456: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  4 14:37:48.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9625'
Feb  4 14:37:48.560: INFO: stderr: "No resources found.\n"
Feb  4 14:37:48.560: INFO: stdout: ""
Feb  4 14:37:48.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9625 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  4 14:37:48.698: INFO: stderr: ""
Feb  4 14:37:48.698: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:37:48.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9625" for this suite.
Feb  4 14:38:10.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:38:10.840: INFO: namespace kubectl-9625 deletion completed in 22.129509102s

• [SLOW TEST:34.567 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:38:10.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  4 14:38:10.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9236'
Feb  4 14:38:11.070: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  4 14:38:11.070: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  4 14:38:11.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9236'
Feb  4 14:38:11.364: INFO: stderr: ""
Feb  4 14:38:11.365: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:38:11.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9236" for this suite.
Feb  4 14:38:17.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:38:17.536: INFO: namespace kubectl-9236 deletion completed in 6.158646743s

• [SLOW TEST:6.696 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:38:17.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  4 14:38:17.701: INFO: Number of nodes with available pods: 0
Feb  4 14:38:17.701: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:19.532: INFO: Number of nodes with available pods: 0
Feb  4 14:38:19.533: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:20.118: INFO: Number of nodes with available pods: 0
Feb  4 14:38:20.118: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:21.240: INFO: Number of nodes with available pods: 0
Feb  4 14:38:21.240: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:21.734: INFO: Number of nodes with available pods: 0
Feb  4 14:38:21.734: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:22.719: INFO: Number of nodes with available pods: 0
Feb  4 14:38:22.719: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:25.371: INFO: Number of nodes with available pods: 0
Feb  4 14:38:25.371: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:25.745: INFO: Number of nodes with available pods: 0
Feb  4 14:38:25.745: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:26.905: INFO: Number of nodes with available pods: 0
Feb  4 14:38:26.905: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:27.720: INFO: Number of nodes with available pods: 0
Feb  4 14:38:27.720: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:28.719: INFO: Number of nodes with available pods: 2
Feb  4 14:38:28.719: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  4 14:38:28.825: INFO: Number of nodes with available pods: 1
Feb  4 14:38:28.825: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:29.875: INFO: Number of nodes with available pods: 1
Feb  4 14:38:29.875: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:30.847: INFO: Number of nodes with available pods: 1
Feb  4 14:38:30.847: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:31.847: INFO: Number of nodes with available pods: 1
Feb  4 14:38:31.847: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:32.858: INFO: Number of nodes with available pods: 1
Feb  4 14:38:32.858: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:33.843: INFO: Number of nodes with available pods: 1
Feb  4 14:38:33.844: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:34.841: INFO: Number of nodes with available pods: 1
Feb  4 14:38:34.841: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:35.848: INFO: Number of nodes with available pods: 1
Feb  4 14:38:35.849: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:36.877: INFO: Number of nodes with available pods: 1
Feb  4 14:38:36.877: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:38:37.843: INFO: Number of nodes with available pods: 2
Feb  4 14:38:37.843: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-541, will wait for the garbage collector to delete the pods
Feb  4 14:38:37.961: INFO: Deleting DaemonSet.extensions daemon-set took: 38.05484ms
Feb  4 14:38:38.262: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.713877ms
Feb  4 14:38:56.671: INFO: Number of nodes with available pods: 0
Feb  4 14:38:56.671: INFO: Number of running nodes: 0, number of available pods: 0
Feb  4 14:38:56.675: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-541/daemonsets","resourceVersion":"23080136"},"items":null}

Feb  4 14:38:56.677: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-541/pods","resourceVersion":"23080136"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:38:56.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-541" for this suite.
Feb  4 14:39:02.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:39:02.872: INFO: namespace daemonsets-541 deletion completed in 6.175630419s

• [SLOW TEST:45.335 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:39:02.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-ecbc1201-f10b-4000-929a-16581a612b68
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:39:15.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7895" for this suite.
Feb  4 14:39:39.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:39:39.317: INFO: namespace configmap-7895 deletion completed in 24.178350021s

• [SLOW TEST:36.445 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:39:39.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-be8cdafc-b7b1-498d-ac8d-30e2f06783f0
STEP: Creating a pod to test consume configMaps
Feb  4 14:39:39.484: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862" in namespace "projected-360" to be "success or failure"
Feb  4 14:39:39.491: INFO: Pod "pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862": Phase="Pending", Reason="", readiness=false. Elapsed: 7.217991ms
Feb  4 14:39:41.529: INFO: Pod "pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045702985s
Feb  4 14:39:43.538: INFO: Pod "pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054339659s
Feb  4 14:39:45.548: INFO: Pod "pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064713362s
Feb  4 14:39:47.556: INFO: Pod "pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072696636s
Feb  4 14:39:49.565: INFO: Pod "pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0810377s
Feb  4 14:39:51.572: INFO: Pod "pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.088524063s
STEP: Saw pod success
Feb  4 14:39:51.572: INFO: Pod "pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862" satisfied condition "success or failure"
Feb  4 14:39:51.577: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 14:39:51.695: INFO: Waiting for pod pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862 to disappear
Feb  4 14:39:51.703: INFO: Pod pod-projected-configmaps-77fea12f-dc21-41cc-9b01-3f93f7f19862 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:39:51.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-360" for this suite.
Feb  4 14:39:57.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:39:57.900: INFO: namespace projected-360 deletion completed in 6.190415888s

• [SLOW TEST:18.582 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:39:57.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0204 14:40:13.383676       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  4 14:40:13.383: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:40:13.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8503" for this suite.
Feb  4 14:40:25.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:40:25.881: INFO: namespace gc-8503 deletion completed in 11.269404935s

• [SLOW TEST:27.981 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:40:25.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb  4 14:40:25.996: INFO: Waiting up to 5m0s for pod "client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95" in namespace "containers-7022" to be "success or failure"
Feb  4 14:40:26.046: INFO: Pod "client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95": Phase="Pending", Reason="", readiness=false. Elapsed: 50.423765ms
Feb  4 14:40:28.056: INFO: Pod "client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060727482s
Feb  4 14:40:30.071: INFO: Pod "client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075461152s
Feb  4 14:40:32.079: INFO: Pod "client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083537645s
Feb  4 14:40:34.097: INFO: Pod "client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.1015819s
Feb  4 14:40:36.110: INFO: Pod "client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113979471s
STEP: Saw pod success
Feb  4 14:40:36.110: INFO: Pod "client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95" satisfied condition "success or failure"
Feb  4 14:40:36.115: INFO: Trying to get logs from node iruya-node pod client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95 container test-container: 
STEP: delete the pod
Feb  4 14:40:36.184: INFO: Waiting for pod client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95 to disappear
Feb  4 14:40:36.223: INFO: Pod client-containers-539e5db1-e119-40ea-babe-a61c3f09ea95 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:40:36.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7022" for this suite.
Feb  4 14:40:42.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:40:42.365: INFO: namespace containers-7022 deletion completed in 6.134264565s

• [SLOW TEST:16.484 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:40:42.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8f303ed0-7af3-440e-abb0-10584de0bfb2
STEP: Creating a pod to test consume configMaps
Feb  4 14:40:42.440: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25" in namespace "projected-4400" to be "success or failure"
Feb  4 14:40:42.450: INFO: Pod "pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25": Phase="Pending", Reason="", readiness=false. Elapsed: 10.146799ms
Feb  4 14:40:44.464: INFO: Pod "pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024172006s
Feb  4 14:40:46.482: INFO: Pod "pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041683918s
Feb  4 14:40:48.511: INFO: Pod "pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070803435s
Feb  4 14:40:50.526: INFO: Pod "pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085575446s
Feb  4 14:40:52.717: INFO: Pod "pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.27650331s
STEP: Saw pod success
Feb  4 14:40:52.717: INFO: Pod "pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25" satisfied condition "success or failure"
Feb  4 14:40:52.728: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 14:40:53.277: INFO: Waiting for pod pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25 to disappear
Feb  4 14:40:53.924: INFO: Pod pod-projected-configmaps-6fa67946-2d2a-4224-a257-b0a5f247bd25 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:40:53.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4400" for this suite.
Feb  4 14:41:00.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:41:00.184: INFO: namespace projected-4400 deletion completed in 6.246464754s

• [SLOW TEST:17.819 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:41:00.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9fd30fb8-5c70-45ed-a115-6dbaba5099d7
STEP: Creating a pod to test consume secrets
Feb  4 14:41:00.816: INFO: Waiting up to 5m0s for pod "pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd" in namespace "secrets-2878" to be "success or failure"
Feb  4 14:41:00.863: INFO: Pod "pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 46.95207ms
Feb  4 14:41:02.911: INFO: Pod "pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09496916s
Feb  4 14:41:04.924: INFO: Pod "pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107882865s
Feb  4 14:41:06.934: INFO: Pod "pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117955489s
Feb  4 14:41:08.940: INFO: Pod "pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12366807s
Feb  4 14:41:10.950: INFO: Pod "pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.133888307s
STEP: Saw pod success
Feb  4 14:41:10.950: INFO: Pod "pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd" satisfied condition "success or failure"
Feb  4 14:41:10.971: INFO: Trying to get logs from node iruya-node pod pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd container secret-volume-test: 
STEP: delete the pod
Feb  4 14:41:11.028: INFO: Waiting for pod pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd to disappear
Feb  4 14:41:11.033: INFO: Pod pod-secrets-668abe15-5f91-4e24-ae4f-79a61ebd4cbd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:41:11.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2878" for this suite.
Feb  4 14:41:17.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:41:17.227: INFO: namespace secrets-2878 deletion completed in 6.188912609s

• [SLOW TEST:17.043 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:41:17.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  4 14:41:17.382: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:41:30.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-953" for this suite.
Feb  4 14:41:36.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:41:37.057: INFO: namespace init-container-953 deletion completed in 6.176899227s

• [SLOW TEST:19.829 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:41:37.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  4 14:41:57.348: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:41:57.381: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:41:59.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:41:59.393: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:01.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:01.391: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:03.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:03.394: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:05.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:05.390: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:07.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:07.391: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:09.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:09.393: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:11.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:11.394: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:13.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:13.396: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:15.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:15.391: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:17.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:17.391: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  4 14:42:19.382: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  4 14:42:19.393: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:42:19.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7880" for this suite.
Feb  4 14:42:41.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:42:41.648: INFO: namespace container-lifecycle-hook-7880 deletion completed in 22.220127256s

• [SLOW TEST:64.589 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:42:41.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-a15ab396-8d69-4e52-9e67-299df68f0e65
STEP: Creating a pod to test consume secrets
Feb  4 14:42:41.804: INFO: Waiting up to 5m0s for pod "pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf" in namespace "secrets-6759" to be "success or failure"
Feb  4 14:42:41.844: INFO: Pod "pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf": Phase="Pending", Reason="", readiness=false. Elapsed: 40.398086ms
Feb  4 14:42:43.869: INFO: Pod "pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064886109s
Feb  4 14:42:45.879: INFO: Pod "pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075157258s
Feb  4 14:42:47.888: INFO: Pod "pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084227334s
Feb  4 14:42:49.897: INFO: Pod "pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092848274s
Feb  4 14:42:51.906: INFO: Pod "pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102216503s
STEP: Saw pod success
Feb  4 14:42:51.906: INFO: Pod "pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf" satisfied condition "success or failure"
Feb  4 14:42:51.909: INFO: Trying to get logs from node iruya-node pod pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf container secret-volume-test: 
STEP: delete the pod
Feb  4 14:42:51.981: INFO: Waiting for pod pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf to disappear
Feb  4 14:42:51.989: INFO: Pod pod-secrets-c9ce6444-2acc-489a-b46a-0f01fdd6fabf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:42:51.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6759" for this suite.
Feb  4 14:42:58.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:42:58.161: INFO: namespace secrets-6759 deletion completed in 6.164903201s

• [SLOW TEST:16.512 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:42:58.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  4 14:42:58.295: INFO: Waiting up to 5m0s for pod "pod-3124c175-c2d5-421b-9cc3-533dbc4a336b" in namespace "emptydir-8475" to be "success or failure"
Feb  4 14:42:58.300: INFO: Pod "pod-3124c175-c2d5-421b-9cc3-533dbc4a336b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.925539ms
Feb  4 14:43:00.311: INFO: Pod "pod-3124c175-c2d5-421b-9cc3-533dbc4a336b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015522846s
Feb  4 14:43:02.318: INFO: Pod "pod-3124c175-c2d5-421b-9cc3-533dbc4a336b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022776807s
Feb  4 14:43:04.326: INFO: Pod "pod-3124c175-c2d5-421b-9cc3-533dbc4a336b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030329014s
Feb  4 14:43:06.454: INFO: Pod "pod-3124c175-c2d5-421b-9cc3-533dbc4a336b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158811208s
STEP: Saw pod success
Feb  4 14:43:06.454: INFO: Pod "pod-3124c175-c2d5-421b-9cc3-533dbc4a336b" satisfied condition "success or failure"
Feb  4 14:43:06.466: INFO: Trying to get logs from node iruya-node pod pod-3124c175-c2d5-421b-9cc3-533dbc4a336b container test-container: 
STEP: delete the pod
Feb  4 14:43:06.520: INFO: Waiting for pod pod-3124c175-c2d5-421b-9cc3-533dbc4a336b to disappear
Feb  4 14:43:06.525: INFO: Pod pod-3124c175-c2d5-421b-9cc3-533dbc4a336b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:43:06.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8475" for this suite.
Feb  4 14:43:12.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:43:12.655: INFO: namespace emptydir-8475 deletion completed in 6.123857634s

• [SLOW TEST:14.494 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:43:12.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b0e21160-ba51-4e18-860b-47cf3beb1486
STEP: Creating a pod to test consume configMaps
Feb  4 14:43:12.747: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5" in namespace "projected-8728" to be "success or failure"
Feb  4 14:43:12.754: INFO: Pod "pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.669004ms
Feb  4 14:43:14.763: INFO: Pod "pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016237171s
Feb  4 14:43:16.769: INFO: Pod "pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021796602s
Feb  4 14:43:18.781: INFO: Pod "pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033794695s
Feb  4 14:43:20.804: INFO: Pod "pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056783003s
STEP: Saw pod success
Feb  4 14:43:20.804: INFO: Pod "pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5" satisfied condition "success or failure"
Feb  4 14:43:20.817: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 14:43:20.920: INFO: Waiting for pod pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5 to disappear
Feb  4 14:43:20.934: INFO: Pod pod-projected-configmaps-4cd298fa-d0b0-4da7-8f39-5bce86021cb5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:43:20.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8728" for this suite.
Feb  4 14:43:27.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:43:27.124: INFO: namespace projected-8728 deletion completed in 6.173822243s

• [SLOW TEST:14.469 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
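
The repeated `Phase="Pending" … Elapsed: …` lines in the tests above come from a poll-until-terminal-phase loop ("success or failure"). A minimal Python sketch of that pattern, assuming a hypothetical `get_phase` callable in place of the framework's API-server query:

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase,
    mirroring the framework's 'success or failure' wait loop."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        elapsed = time.monotonic() - start
        # Matches the shape of the log lines: Phase="Pending", Elapsed: 2.01s
        print(f'Pod: Phase="{phase}", Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated phase sequence: Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), interval=0.01)
```

The real framework polls at a fixed 2s interval against a 5m timeout, as the elapsed timestamps in the log show.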
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:43:27.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:43:27.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7718" for this suite.
Feb  4 14:43:49.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:43:49.463: INFO: namespace pods-7718 deletion completed in 22.237240307s

• [SLOW TEST:22.338 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:43:49.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-714
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  4 14:43:49.529: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  4 14:44:27.917: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-714 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 14:44:27.917: INFO: >>> kubeConfig: /root/.kube/config
I0204 14:44:27.983446       8 log.go:172] (0xc001c3f550) (0xc002dfbe00) Create stream
I0204 14:44:27.983508       8 log.go:172] (0xc001c3f550) (0xc002dfbe00) Stream added, broadcasting: 1
I0204 14:44:27.991216       8 log.go:172] (0xc001c3f550) Reply frame received for 1
I0204 14:44:27.991268       8 log.go:172] (0xc001c3f550) (0xc0020ca0a0) Create stream
I0204 14:44:27.991298       8 log.go:172] (0xc001c3f550) (0xc0020ca0a0) Stream added, broadcasting: 3
I0204 14:44:27.993918       8 log.go:172] (0xc001c3f550) Reply frame received for 3
I0204 14:44:27.993938       8 log.go:172] (0xc001c3f550) (0xc0020ca140) Create stream
I0204 14:44:27.993947       8 log.go:172] (0xc001c3f550) (0xc0020ca140) Stream added, broadcasting: 5
I0204 14:44:27.996121       8 log.go:172] (0xc001c3f550) Reply frame received for 5
I0204 14:44:28.161021       8 log.go:172] (0xc001c3f550) Data frame received for 3
I0204 14:44:28.161052       8 log.go:172] (0xc0020ca0a0) (3) Data frame handling
I0204 14:44:28.161063       8 log.go:172] (0xc0020ca0a0) (3) Data frame sent
I0204 14:44:28.321469       8 log.go:172] (0xc001c3f550) Data frame received for 1
I0204 14:44:28.321559       8 log.go:172] (0xc001c3f550) (0xc0020ca0a0) Stream removed, broadcasting: 3
I0204 14:44:28.321642       8 log.go:172] (0xc002dfbe00) (1) Data frame handling
I0204 14:44:28.321685       8 log.go:172] (0xc002dfbe00) (1) Data frame sent
I0204 14:44:28.321701       8 log.go:172] (0xc001c3f550) (0xc002dfbe00) Stream removed, broadcasting: 1
I0204 14:44:28.322050       8 log.go:172] (0xc001c3f550) (0xc0020ca140) Stream removed, broadcasting: 5
I0204 14:44:28.322128       8 log.go:172] (0xc001c3f550) (0xc002dfbe00) Stream removed, broadcasting: 1
I0204 14:44:28.322152       8 log.go:172] (0xc001c3f550) (0xc0020ca0a0) Stream removed, broadcasting: 3
I0204 14:44:28.322167       8 log.go:172] (0xc001c3f550) (0xc0020ca140) Stream removed, broadcasting: 5
I0204 14:44:28.322228       8 log.go:172] (0xc001c3f550) Go away received
Feb  4 14:44:28.322: INFO: Waiting for endpoints: map[]
Feb  4 14:44:28.331: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-714 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 14:44:28.332: INFO: >>> kubeConfig: /root/.kube/config
I0204 14:44:28.425812       8 log.go:172] (0xc002740160) (0xc001b6c140) Create stream
I0204 14:44:28.425864       8 log.go:172] (0xc002740160) (0xc001b6c140) Stream added, broadcasting: 1
I0204 14:44:28.433989       8 log.go:172] (0xc002740160) Reply frame received for 1
I0204 14:44:28.434047       8 log.go:172] (0xc002740160) (0xc0020ca460) Create stream
I0204 14:44:28.434059       8 log.go:172] (0xc002740160) (0xc0020ca460) Stream added, broadcasting: 3
I0204 14:44:28.440157       8 log.go:172] (0xc002740160) Reply frame received for 3
I0204 14:44:28.440191       8 log.go:172] (0xc002740160) (0xc0020ca500) Create stream
I0204 14:44:28.440205       8 log.go:172] (0xc002740160) (0xc0020ca500) Stream added, broadcasting: 5
I0204 14:44:28.443168       8 log.go:172] (0xc002740160) Reply frame received for 5
I0204 14:44:28.687839       8 log.go:172] (0xc002740160) Data frame received for 3
I0204 14:44:28.688003       8 log.go:172] (0xc0020ca460) (3) Data frame handling
I0204 14:44:28.688025       8 log.go:172] (0xc0020ca460) (3) Data frame sent
I0204 14:44:28.881898       8 log.go:172] (0xc002740160) (0xc0020ca460) Stream removed, broadcasting: 3
I0204 14:44:28.882000       8 log.go:172] (0xc002740160) Data frame received for 1
I0204 14:44:28.882025       8 log.go:172] (0xc001b6c140) (1) Data frame handling
I0204 14:44:28.882051       8 log.go:172] (0xc001b6c140) (1) Data frame sent
I0204 14:44:28.882066       8 log.go:172] (0xc002740160) (0xc001b6c140) Stream removed, broadcasting: 1
I0204 14:44:28.882179       8 log.go:172] (0xc002740160) (0xc0020ca500) Stream removed, broadcasting: 5
I0204 14:44:28.882297       8 log.go:172] (0xc002740160) (0xc001b6c140) Stream removed, broadcasting: 1
I0204 14:44:28.882350       8 log.go:172] (0xc002740160) (0xc0020ca460) Stream removed, broadcasting: 3
I0204 14:44:28.882400       8 log.go:172] (0xc002740160) (0xc0020ca500) Stream removed, broadcasting: 5
I0204 14:44:28.882734       8 log.go:172] (0xc002740160) Go away received
Feb  4 14:44:28.882: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:44:28.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-714" for this suite.
Feb  4 14:44:52.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:44:53.049: INFO: namespace pod-network-test-714 deletion completed in 24.155517621s

• [SLOW TEST:63.586 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
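
The connectivity check above execs `curl` against the test container's `/dial` endpoint, passing the peer pod's IP as a query parameter. A small sketch of how that probe URL is assembled (parameter names taken from the `ExecWithOptions` lines in this log; the helper name is hypothetical):

```python
from urllib.parse import urlencode

def dial_url(exec_pod_ip, target_ip, port=8080):
    """Build the /dial probe URL served by the netserver test pod.
    Both the probe server and the target listen on 8080 in this run."""
    query = urlencode({
        "request": "hostName",   # ask the target to report its hostname
        "protocol": "http",
        "host": target_ip,
        "port": port,
        "tries": 1,
    })
    return f"http://{exec_pod_ip}:{port}/dial?{query}"

url = dial_url("10.44.0.2", "10.44.0.1")
```

The test passes once every expected endpoint (pod hostname) has been seen in the dial responses, which is why the log ends each probe with `Waiting for endpoints: map[]` (an empty "still missing" set).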
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:44:53.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-470
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  4 14:44:53.336: INFO: Found 0 stateful pods, waiting for 3
Feb  4 14:45:03.350: INFO: Found 2 stateful pods, waiting for 3
Feb  4 14:45:13.354: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:45:13.354: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:45:13.354: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  4 14:45:23.347: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:45:23.347: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:45:23.347: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:45:23.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-470 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 14:45:25.748: INFO: stderr: "I0204 14:45:25.507259    3194 log.go:172] (0xc0005ea790) (0xc000392a00) Create stream\nI0204 14:45:25.507363    3194 log.go:172] (0xc0005ea790) (0xc000392a00) Stream added, broadcasting: 1\nI0204 14:45:25.528676    3194 log.go:172] (0xc0005ea790) Reply frame received for 1\nI0204 14:45:25.528749    3194 log.go:172] (0xc0005ea790) (0xc000229b80) Create stream\nI0204 14:45:25.528763    3194 log.go:172] (0xc0005ea790) (0xc000229b80) Stream added, broadcasting: 3\nI0204 14:45:25.531027    3194 log.go:172] (0xc0005ea790) Reply frame received for 3\nI0204 14:45:25.531069    3194 log.go:172] (0xc0005ea790) (0xc00064a320) Create stream\nI0204 14:45:25.531082    3194 log.go:172] (0xc0005ea790) (0xc00064a320) Stream added, broadcasting: 5\nI0204 14:45:25.533694    3194 log.go:172] (0xc0005ea790) Reply frame received for 5\nI0204 14:45:25.641730    3194 log.go:172] (0xc0005ea790) Data frame received for 5\nI0204 14:45:25.641805    3194 log.go:172] (0xc00064a320) (5) Data frame handling\nI0204 14:45:25.641825    3194 log.go:172] (0xc00064a320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 14:45:25.663000    3194 log.go:172] (0xc0005ea790) Data frame received for 3\nI0204 14:45:25.663066    3194 log.go:172] (0xc000229b80) (3) Data frame handling\nI0204 14:45:25.663093    3194 log.go:172] (0xc000229b80) (3) Data frame sent\nI0204 14:45:25.739582    3194 log.go:172] (0xc0005ea790) (0xc000229b80) Stream removed, broadcasting: 3\nI0204 14:45:25.739751    3194 log.go:172] (0xc0005ea790) Data frame received for 1\nI0204 14:45:25.739783    3194 log.go:172] (0xc000392a00) (1) Data frame handling\nI0204 14:45:25.739799    3194 log.go:172] (0xc000392a00) (1) Data frame sent\nI0204 14:45:25.739816    3194 log.go:172] (0xc0005ea790) (0xc000392a00) Stream removed, broadcasting: 1\nI0204 14:45:25.739891    3194 log.go:172] (0xc0005ea790) (0xc00064a320) Stream removed, broadcasting: 5\nI0204 14:45:25.740000    3194 log.go:172] (0xc0005ea790) Go away received\nI0204 14:45:25.740258    3194 log.go:172] (0xc0005ea790) (0xc000392a00) Stream removed, broadcasting: 1\nI0204 14:45:25.740281    3194 log.go:172] (0xc0005ea790) (0xc000229b80) Stream removed, broadcasting: 3\nI0204 14:45:25.740292    3194 log.go:172] (0xc0005ea790) (0xc00064a320) Stream removed, broadcasting: 5\n"
Feb  4 14:45:25.748: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 14:45:25.748: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  4 14:45:35.908: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  4 14:45:45.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-470 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:45:46.309: INFO: stderr: "I0204 14:45:46.103213    3222 log.go:172] (0xc0009a0420) (0xc0008fe820) Create stream\nI0204 14:45:46.103352    3222 log.go:172] (0xc0009a0420) (0xc0008fe820) Stream added, broadcasting: 1\nI0204 14:45:46.116138    3222 log.go:172] (0xc0009a0420) Reply frame received for 1\nI0204 14:45:46.116179    3222 log.go:172] (0xc0009a0420) (0xc0008fe000) Create stream\nI0204 14:45:46.116190    3222 log.go:172] (0xc0009a0420) (0xc0008fe000) Stream added, broadcasting: 3\nI0204 14:45:46.117319    3222 log.go:172] (0xc0009a0420) Reply frame received for 3\nI0204 14:45:46.117383    3222 log.go:172] (0xc0009a0420) (0xc000870000) Create stream\nI0204 14:45:46.117399    3222 log.go:172] (0xc0009a0420) (0xc000870000) Stream added, broadcasting: 5\nI0204 14:45:46.118756    3222 log.go:172] (0xc0009a0420) Reply frame received for 5\nI0204 14:45:46.213183    3222 log.go:172] (0xc0009a0420) Data frame received for 3\nI0204 14:45:46.213307    3222 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0204 14:45:46.213414    3222 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0204 14:45:46.214258    3222 log.go:172] (0xc0009a0420) Data frame received for 5\nI0204 14:45:46.214298    3222 log.go:172] (0xc000870000) (5) Data frame handling\nI0204 14:45:46.214321    3222 log.go:172] (0xc000870000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0204 14:45:46.298790    3222 log.go:172] (0xc0009a0420) Data frame received for 1\nI0204 14:45:46.299312    3222 log.go:172] (0xc0009a0420) (0xc0008fe000) Stream removed, broadcasting: 3\nI0204 14:45:46.299554    3222 log.go:172] (0xc0008fe820) (1) Data frame handling\nI0204 14:45:46.299596    3222 log.go:172] (0xc0008fe820) (1) Data frame sent\nI0204 14:45:46.299699    3222 log.go:172] (0xc0009a0420) (0xc000870000) Stream removed, broadcasting: 5\nI0204 14:45:46.299763    3222 log.go:172] (0xc0009a0420) (0xc0008fe820) Stream removed, broadcasting: 1\nI0204 14:45:46.299787    3222 log.go:172] (0xc0009a0420) Go away received\nI0204 14:45:46.300720    3222 log.go:172] (0xc0009a0420) (0xc0008fe820) Stream removed, broadcasting: 1\nI0204 14:45:46.300744    3222 log.go:172] (0xc0009a0420) (0xc0008fe000) Stream removed, broadcasting: 3\nI0204 14:45:46.300758    3222 log.go:172] (0xc0009a0420) (0xc000870000) Stream removed, broadcasting: 5\n"
Feb  4 14:45:46.309: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  4 14:45:46.309: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  4 14:45:56.409: INFO: Waiting for StatefulSet statefulset-470/ss2 to complete update
Feb  4 14:45:56.409: INFO: Waiting for Pod statefulset-470/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  4 14:45:56.409: INFO: Waiting for Pod statefulset-470/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  4 14:45:56.409: INFO: Waiting for Pod statefulset-470/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  4 14:46:06.795: INFO: Waiting for StatefulSet statefulset-470/ss2 to complete update
Feb  4 14:46:06.795: INFO: Waiting for Pod statefulset-470/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  4 14:46:06.795: INFO: Waiting for Pod statefulset-470/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  4 14:46:16.772: INFO: Waiting for StatefulSet statefulset-470/ss2 to complete update
Feb  4 14:46:16.773: INFO: Waiting for Pod statefulset-470/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  4 14:46:26.429: INFO: Waiting for StatefulSet statefulset-470/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  4 14:46:36.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-470 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  4 14:46:36.904: INFO: stderr: "I0204 14:46:36.626098    3241 log.go:172] (0xc0008ea0b0) (0xc0007540a0) Create stream\nI0204 14:46:36.626296    3241 log.go:172] (0xc0008ea0b0) (0xc0007540a0) Stream added, broadcasting: 1\nI0204 14:46:36.634953    3241 log.go:172] (0xc0008ea0b0) Reply frame received for 1\nI0204 14:46:36.635006    3241 log.go:172] (0xc0008ea0b0) (0xc000900000) Create stream\nI0204 14:46:36.635020    3241 log.go:172] (0xc0008ea0b0) (0xc000900000) Stream added, broadcasting: 3\nI0204 14:46:36.636444    3241 log.go:172] (0xc0008ea0b0) Reply frame received for 3\nI0204 14:46:36.636473    3241 log.go:172] (0xc0008ea0b0) (0xc000536280) Create stream\nI0204 14:46:36.636483    3241 log.go:172] (0xc0008ea0b0) (0xc000536280) Stream added, broadcasting: 5\nI0204 14:46:36.637540    3241 log.go:172] (0xc0008ea0b0) Reply frame received for 5\nI0204 14:46:36.754391    3241 log.go:172] (0xc0008ea0b0) Data frame received for 5\nI0204 14:46:36.754423    3241 log.go:172] (0xc000536280) (5) Data frame handling\nI0204 14:46:36.754437    3241 log.go:172] (0xc000536280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0204 14:46:36.800777    3241 log.go:172] (0xc0008ea0b0) Data frame received for 3\nI0204 14:46:36.800805    3241 log.go:172] (0xc000900000) (3) Data frame handling\nI0204 14:46:36.800816    3241 log.go:172] (0xc000900000) (3) Data frame sent\nI0204 14:46:36.897071    3241 log.go:172] (0xc0008ea0b0) Data frame received for 1\nI0204 14:46:36.897113    3241 log.go:172] (0xc0007540a0) (1) Data frame handling\nI0204 14:46:36.897130    3241 log.go:172] (0xc0007540a0) (1) Data frame sent\nI0204 14:46:36.897142    3241 log.go:172] (0xc0008ea0b0) (0xc0007540a0) Stream removed, broadcasting: 1\nI0204 14:46:36.897929    3241 log.go:172] (0xc0008ea0b0) (0xc000900000) Stream removed, broadcasting: 3\nI0204 14:46:36.897983    3241 log.go:172] (0xc0008ea0b0) (0xc000536280) Stream removed, broadcasting: 5\nI0204 14:46:36.898025    3241 log.go:172] (0xc0008ea0b0) (0xc0007540a0) Stream removed, broadcasting: 1\nI0204 14:46:36.898038    3241 log.go:172] (0xc0008ea0b0) (0xc000900000) Stream removed, broadcasting: 3\nI0204 14:46:36.898047    3241 log.go:172] (0xc0008ea0b0) (0xc000536280) Stream removed, broadcasting: 5\n"
Feb  4 14:46:36.904: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  4 14:46:36.904: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  4 14:46:46.973: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  4 14:46:57.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-470 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  4 14:46:57.385: INFO: stderr: "I0204 14:46:57.189094    3255 log.go:172] (0xc000920210) (0xc0008866e0) Create stream\nI0204 14:46:57.189187    3255 log.go:172] (0xc000920210) (0xc0008866e0) Stream added, broadcasting: 1\nI0204 14:46:57.192778    3255 log.go:172] (0xc000920210) Reply frame received for 1\nI0204 14:46:57.192843    3255 log.go:172] (0xc000920210) (0xc00061a1e0) Create stream\nI0204 14:46:57.192858    3255 log.go:172] (0xc000920210) (0xc00061a1e0) Stream added, broadcasting: 3\nI0204 14:46:57.194116    3255 log.go:172] (0xc000920210) Reply frame received for 3\nI0204 14:46:57.194146    3255 log.go:172] (0xc000920210) (0xc000886780) Create stream\nI0204 14:46:57.194154    3255 log.go:172] (0xc000920210) (0xc000886780) Stream added, broadcasting: 5\nI0204 14:46:57.195199    3255 log.go:172] (0xc000920210) Reply frame received for 5\nI0204 14:46:57.259147    3255 log.go:172] (0xc000920210) Data frame received for 5\nI0204 14:46:57.259249    3255 log.go:172] (0xc000886780) (5) Data frame handling\nI0204 14:46:57.259267    3255 log.go:172] (0xc000886780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0204 14:46:57.262841    3255 log.go:172] (0xc000920210) Data frame received for 3\nI0204 14:46:57.263162    3255 log.go:172] (0xc00061a1e0) (3) Data frame handling\nI0204 14:46:57.263244    3255 log.go:172] (0xc00061a1e0) (3) Data frame sent\nI0204 14:46:57.375990    3255 log.go:172] (0xc000920210) (0xc00061a1e0) Stream removed, broadcasting: 3\nI0204 14:46:57.376075    3255 log.go:172] (0xc000920210) Data frame received for 1\nI0204 14:46:57.376100    3255 log.go:172] (0xc000920210) (0xc000886780) Stream removed, broadcasting: 5\nI0204 14:46:57.376138    3255 log.go:172] (0xc0008866e0) (1) Data frame handling\nI0204 14:46:57.376154    3255 log.go:172] (0xc0008866e0) (1) Data frame sent\nI0204 14:46:57.376166    3255 log.go:172] (0xc000920210) (0xc0008866e0) Stream removed, broadcasting: 1\nI0204 14:46:57.376189    3255 log.go:172] (0xc000920210) Go away received\nI0204 14:46:57.377168    3255 log.go:172] (0xc000920210) (0xc0008866e0) Stream removed, broadcasting: 1\nI0204 14:46:57.377237    3255 log.go:172] (0xc000920210) (0xc00061a1e0) Stream removed, broadcasting: 3\nI0204 14:46:57.377264    3255 log.go:172] (0xc000920210) (0xc000886780) Stream removed, broadcasting: 5\n"
Feb  4 14:46:57.385: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  4 14:46:57.385: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  4 14:47:07.423: INFO: Waiting for StatefulSet statefulset-470/ss2 to complete update
Feb  4 14:47:07.423: INFO: Waiting for Pod statefulset-470/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  4 14:47:07.423: INFO: Waiting for Pod statefulset-470/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  4 14:47:17.444: INFO: Waiting for StatefulSet statefulset-470/ss2 to complete update
Feb  4 14:47:17.444: INFO: Waiting for Pod statefulset-470/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  4 14:47:17.444: INFO: Waiting for Pod statefulset-470/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  4 14:47:27.445: INFO: Waiting for StatefulSet statefulset-470/ss2 to complete update
Feb  4 14:47:27.445: INFO: Waiting for Pod statefulset-470/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  4 14:47:27.445: INFO: Waiting for Pod statefulset-470/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  4 14:47:37.441: INFO: Waiting for StatefulSet statefulset-470/ss2 to complete update
Feb  4 14:47:37.441: INFO: Waiting for Pod statefulset-470/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  4 14:47:47.439: INFO: Waiting for StatefulSet statefulset-470/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  4 14:47:57.443: INFO: Deleting all statefulset in ns statefulset-470
Feb  4 14:47:57.449: INFO: Scaling statefulset ss2 to 0
Feb  4 14:48:27.481: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 14:48:27.485: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:48:27.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-470" for this suite.
Feb  4 14:48:35.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:48:35.761: INFO: namespace statefulset-470 deletion completed in 8.237896316s

• [SLOW TEST:222.712 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
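
The `Waiting for Pod … to have revision … update revision …` lines in the StatefulSet test above repeat until every pod's controller-revision label matches the set's update revision. A minimal sketch of that completion condition, with pod/revision names taken from this log (the helper name is hypothetical):

```python
def update_complete(pod_revisions, update_revision):
    """True once every pod carries the StatefulSet's update revision --
    the condition behind the 'Waiting for ... to complete update' loop."""
    return all(rev == update_revision for rev in pod_revisions.values())

# Mid-rollout snapshot: ss2-2 updated, ss2-0 and ss2-1 still on the old revision.
mid_rollout = {
    "ss2-0": "ss2-6c5cd755cd",
    "ss2-1": "ss2-6c5cd755cd",
    "ss2-2": "ss2-7c9b54fd4c",
}
rolled_out = {pod: "ss2-7c9b54fd4c" for pod in mid_rollout}
```

During the rollback phase the roles swap: `ss2-6c5cd755cd` becomes the update revision and the same wait loop runs again, which is why the log shows the two revision hashes in reversed order.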
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:48:35.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5199
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  4 14:48:35.903: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  4 14:49:16.398: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5199 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 14:49:16.398: INFO: >>> kubeConfig: /root/.kube/config
I0204 14:49:16.514511       8 log.go:172] (0xc001310210) (0xc00112b7c0) Create stream
I0204 14:49:16.514696       8 log.go:172] (0xc001310210) (0xc00112b7c0) Stream added, broadcasting: 1
I0204 14:49:16.530750       8 log.go:172] (0xc001310210) Reply frame received for 1
I0204 14:49:16.530817       8 log.go:172] (0xc001310210) (0xc0015580a0) Create stream
I0204 14:49:16.530831       8 log.go:172] (0xc001310210) (0xc0015580a0) Stream added, broadcasting: 3
I0204 14:49:16.536633       8 log.go:172] (0xc001310210) Reply frame received for 3
I0204 14:49:16.536662       8 log.go:172] (0xc001310210) (0xc00112bd60) Create stream
I0204 14:49:16.536672       8 log.go:172] (0xc001310210) (0xc00112bd60) Stream added, broadcasting: 5
I0204 14:49:16.541923       8 log.go:172] (0xc001310210) Reply frame received for 5
I0204 14:49:17.850039       8 log.go:172] (0xc001310210) Data frame received for 3
I0204 14:49:17.850145       8 log.go:172] (0xc0015580a0) (3) Data frame handling
I0204 14:49:17.850170       8 log.go:172] (0xc0015580a0) (3) Data frame sent
I0204 14:49:18.070608       8 log.go:172] (0xc001310210) Data frame received for 1
I0204 14:49:18.070726       8 log.go:172] (0xc00112b7c0) (1) Data frame handling
I0204 14:49:18.070771       8 log.go:172] (0xc00112b7c0) (1) Data frame sent
I0204 14:49:18.070966       8 log.go:172] (0xc001310210) (0xc00112b7c0) Stream removed, broadcasting: 1
I0204 14:49:18.071157       8 log.go:172] (0xc001310210) (0xc00112bd60) Stream removed, broadcasting: 5
I0204 14:49:18.071213       8 log.go:172] (0xc001310210) (0xc0015580a0) Stream removed, broadcasting: 3
I0204 14:49:18.071254       8 log.go:172] (0xc001310210) (0xc00112b7c0) Stream removed, broadcasting: 1
I0204 14:49:18.071269       8 log.go:172] (0xc001310210) (0xc0015580a0) Stream removed, broadcasting: 3
I0204 14:49:18.071282       8 log.go:172] (0xc001310210) (0xc00112bd60) Stream removed, broadcasting: 5
I0204 14:49:18.072403       8 log.go:172] (0xc001310210) Go away received
Feb  4 14:49:18.072: INFO: Found all expected endpoints: [netserver-0]
Feb  4 14:49:18.080: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5199 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 14:49:18.080: INFO: >>> kubeConfig: /root/.kube/config
I0204 14:49:18.131992       8 log.go:172] (0xc000c12a50) (0xc001558e60) Create stream
I0204 14:49:18.132070       8 log.go:172] (0xc000c12a50) (0xc001558e60) Stream added, broadcasting: 1
I0204 14:49:18.136585       8 log.go:172] (0xc000c12a50) Reply frame received for 1
I0204 14:49:18.136609       8 log.go:172] (0xc000c12a50) (0xc001fc6000) Create stream
I0204 14:49:18.136616       8 log.go:172] (0xc000c12a50) (0xc001fc6000) Stream added, broadcasting: 3
I0204 14:49:18.137815       8 log.go:172] (0xc000c12a50) Reply frame received for 3
I0204 14:49:18.137833       8 log.go:172] (0xc000c12a50) (0xc002dfb0e0) Create stream
I0204 14:49:18.137839       8 log.go:172] (0xc000c12a50) (0xc002dfb0e0) Stream added, broadcasting: 5
I0204 14:49:18.138887       8 log.go:172] (0xc000c12a50) Reply frame received for 5
I0204 14:49:19.224647       8 log.go:172] (0xc000c12a50) Data frame received for 3
I0204 14:49:19.224799       8 log.go:172] (0xc001fc6000) (3) Data frame handling
I0204 14:49:19.224846       8 log.go:172] (0xc001fc6000) (3) Data frame sent
I0204 14:49:19.438858       8 log.go:172] (0xc000c12a50) (0xc001fc6000) Stream removed, broadcasting: 3
I0204 14:49:19.439067       8 log.go:172] (0xc000c12a50) Data frame received for 1
I0204 14:49:19.439083       8 log.go:172] (0xc001558e60) (1) Data frame handling
I0204 14:49:19.439108       8 log.go:172] (0xc001558e60) (1) Data frame sent
I0204 14:49:19.439289       8 log.go:172] (0xc000c12a50) (0xc001558e60) Stream removed, broadcasting: 1
I0204 14:49:19.439460       8 log.go:172] (0xc000c12a50) (0xc002dfb0e0) Stream removed, broadcasting: 5
I0204 14:49:19.439575       8 log.go:172] (0xc000c12a50) Go away received
I0204 14:49:19.439666       8 log.go:172] (0xc000c12a50) (0xc001558e60) Stream removed, broadcasting: 1
I0204 14:49:19.439681       8 log.go:172] (0xc000c12a50) (0xc001fc6000) Stream removed, broadcasting: 3
I0204 14:49:19.439693       8 log.go:172] (0xc000c12a50) (0xc002dfb0e0) Stream removed, broadcasting: 5
Feb  4 14:49:19.439: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:49:19.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5199" for this suite.
Feb  4 14:49:43.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:49:43.569: INFO: namespace pod-network-test-5199 deletion completed in 24.11584182s

• [SLOW TEST:67.807 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
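The `ExecWithOptions` lines above show how the node-pod UDP check works: from the host-network test pod, the framework sends the string `hostName` to the netserver pod's IP with `nc -w 1 -u`, then strips blank lines so that an empty result means no reply arrived within the one-second timeout. A minimal sketch of that pipeline (pod IP `10.32.0.4` and port `8081` are taken from the log; the last line exercises only the blank-line filter locally, using a stand-in reply):

```shell
# The probe the test runs inside the hostexec container (needs a live cluster):
#   echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'
# The netserver replies with its own pod name; UDP replies can carry trailing
# blank lines, which the grep removes before the framework compares the result
# against the expected endpoint list ([netserver-0] above).

# Local demonstration of the filter step: a padded reply reduces to the pod name.
printf 'netserver-0\n\n\n' | grep -v '^[[:space:]]*$'
```

If the filtered output is empty, the framework retries until the endpoint responds or the test times out.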
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:49:43.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3337
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  4 14:49:43.750: INFO: Found 0 stateful pods, waiting for 3
Feb  4 14:49:53.766: INFO: Found 2 stateful pods, waiting for 3
Feb  4 14:50:03.766: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:50:03.766: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:50:03.766: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  4 14:50:13.807: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:50:13.807: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:50:13.807: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  4 14:50:13.879: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  4 14:50:24.021: INFO: Updating stateful set ss2
Feb  4 14:50:24.100: INFO: Waiting for Pod statefulset-3337/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  4 14:50:34.142: INFO: Waiting for Pod statefulset-3337/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  4 14:50:44.573: INFO: Found 2 stateful pods, waiting for 3
Feb  4 14:50:54.599: INFO: Found 2 stateful pods, waiting for 3
Feb  4 14:51:04.590: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:51:04.590: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:51:04.590: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  4 14:51:14.592: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:51:14.592: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  4 14:51:14.592: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  4 14:51:14.629: INFO: Updating stateful set ss2
Feb  4 14:51:14.719: INFO: Waiting for Pod statefulset-3337/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  4 14:51:25.454: INFO: Updating stateful set ss2
Feb  4 14:51:25.504: INFO: Waiting for StatefulSet statefulset-3337/ss2 to complete update
Feb  4 14:51:25.504: INFO: Waiting for Pod statefulset-3337/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  4 14:51:35.518: INFO: Waiting for StatefulSet statefulset-3337/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  4 14:51:45.533: INFO: Deleting all statefulset in ns statefulset-3337
Feb  4 14:51:45.542: INFO: Scaling statefulset ss2 to 0
Feb  4 14:52:15.588: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 14:52:15.595: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:52:15.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3337" for this suite.
Feb  4 14:52:23.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:52:23.917: INFO: namespace statefulset-3337 deletion completed in 8.256240368s

• [SLOW TEST:160.348 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
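The canary and phased rollout above are driven by the StatefulSet's `rollingUpdate.partition` field: only pods with an ordinal greater than or equal to the partition receive the new revision, which is why ss2-2 updates first while ss2-0 and ss2-1 wait, and why a partition larger than the replica count applies no update at all. A hedged sketch of the relevant spec fragment (three replicas as in the log; the partition value shown is the canary step):

```yaml
# Pods with ordinal >= partition are updated to the new revision.
# partition: 3 (== replicas) -> no pods updated ("Not applying an update" step)
# partition: 2               -> canary: only ss2-2 updated
# lowering toward 0 step by step produces the phased rolling update
spec:
  replicas: 3
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2
```

The "Restoring Pods to the correct revision when they are deleted" step relies on the same mechanism: a deleted pod is recreated at whichever revision its ordinal's side of the partition dictates.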
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:52:23.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  4 14:52:24.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1999'
Feb  4 14:52:24.297: INFO: stderr: ""
Feb  4 14:52:24.298: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  4 14:52:24.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1999'
Feb  4 14:52:25.165: INFO: stderr: ""
Feb  4 14:52:25.165: INFO: stdout: "update-demo-nautilus-4xn5v update-demo-nautilus-vc66b "
Feb  4 14:52:25.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xn5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Feb  4 14:52:25.571: INFO: stderr: ""
Feb  4 14:52:25.571: INFO: stdout: ""
Feb  4 14:52:25.571: INFO: update-demo-nautilus-4xn5v is created but not running
Feb  4 14:52:30.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1999'
Feb  4 14:52:31.052: INFO: stderr: ""
Feb  4 14:52:31.052: INFO: stdout: "update-demo-nautilus-4xn5v update-demo-nautilus-vc66b "
Feb  4 14:52:31.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xn5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Feb  4 14:52:31.558: INFO: stderr: ""
Feb  4 14:52:31.558: INFO: stdout: ""
Feb  4 14:52:31.558: INFO: update-demo-nautilus-4xn5v is created but not running
Feb  4 14:52:36.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1999'
Feb  4 14:52:36.679: INFO: stderr: ""
Feb  4 14:52:36.679: INFO: stdout: "update-demo-nautilus-4xn5v update-demo-nautilus-vc66b "
Feb  4 14:52:36.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xn5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Feb  4 14:52:36.766: INFO: stderr: ""
Feb  4 14:52:36.766: INFO: stdout: "true"
Feb  4 14:52:36.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4xn5v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Feb  4 14:52:36.887: INFO: stderr: ""
Feb  4 14:52:36.887: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 14:52:36.887: INFO: validating pod update-demo-nautilus-4xn5v
Feb  4 14:52:36.897: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 14:52:36.897: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 14:52:36.897: INFO: update-demo-nautilus-4xn5v is verified up and running
Feb  4 14:52:36.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc66b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Feb  4 14:52:36.986: INFO: stderr: ""
Feb  4 14:52:36.986: INFO: stdout: "true"
Feb  4 14:52:36.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc66b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1999'
Feb  4 14:52:37.088: INFO: stderr: ""
Feb  4 14:52:37.088: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  4 14:52:37.088: INFO: validating pod update-demo-nautilus-vc66b
Feb  4 14:52:37.102: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  4 14:52:37.102: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  4 14:52:37.102: INFO: update-demo-nautilus-vc66b is verified up and running
STEP: using delete to clean up resources
Feb  4 14:52:37.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1999'
Feb  4 14:52:37.251: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  4 14:52:37.251: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  4 14:52:37.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1999'
Feb  4 14:52:37.328: INFO: stderr: "No resources found.\n"
Feb  4 14:52:37.328: INFO: stdout: ""
Feb  4 14:52:37.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1999 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  4 14:52:37.412: INFO: stderr: ""
Feb  4 14:52:37.412: INFO: stdout: "update-demo-nautilus-4xn5v\nupdate-demo-nautilus-vc66b\n"
Feb  4 14:52:37.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1999'
Feb  4 14:52:38.058: INFO: stderr: "No resources found.\n"
Feb  4 14:52:38.058: INFO: stdout: ""
Feb  4 14:52:38.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1999 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  4 14:52:39.022: INFO: stderr: ""
Feb  4 14:52:39.023: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:52:39.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1999" for this suite.
Feb  4 14:53:01.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:53:01.178: INFO: namespace kubectl-1999 deletion completed in 22.14921123s

• [SLOW TEST:37.261 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:53:01.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-3834
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3834 to expose endpoints map[]
Feb  4 14:53:01.516: INFO: successfully validated that service multi-endpoint-test in namespace services-3834 exposes endpoints map[] (29.564368ms elapsed)
STEP: Creating pod pod1 in namespace services-3834
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3834 to expose endpoints map[pod1:[100]]
Feb  4 14:53:05.716: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.174350692s elapsed, will retry)
Feb  4 14:53:10.790: INFO: successfully validated that service multi-endpoint-test in namespace services-3834 exposes endpoints map[pod1:[100]] (9.248584948s elapsed)
STEP: Creating pod pod2 in namespace services-3834
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3834 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  4 14:53:15.565: INFO: Unexpected endpoints: found map[ceceefc3-24e9-4344-8138-db988a54c6d8:[100]], expected map[pod1:[100] pod2:[101]] (4.765054839s elapsed, will retry)
Feb  4 14:53:18.906: INFO: successfully validated that service multi-endpoint-test in namespace services-3834 exposes endpoints map[pod1:[100] pod2:[101]] (8.106057188s elapsed)
STEP: Deleting pod pod1 in namespace services-3834
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3834 to expose endpoints map[pod2:[101]]
Feb  4 14:53:19.010: INFO: successfully validated that service multi-endpoint-test in namespace services-3834 exposes endpoints map[pod2:[101]] (27.074254ms elapsed)
STEP: Deleting pod pod2 in namespace services-3834
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3834 to expose endpoints map[]
Feb  4 14:53:19.106: INFO: successfully validated that service multi-endpoint-test in namespace services-3834 exposes endpoints map[] (26.542535ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:53:19.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3834" for this suite.
Feb  4 14:53:41.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:53:41.324: INFO: namespace services-3834 deletion completed in 22.11729832s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.146 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
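The multiport test above creates one Service exposing two ports and verifies that each pod appears in the endpoints map under its own target port (pod1 on 100, pod2 on 101, per the log). A hedged sketch of such a Service; the name comes from the log, but the selector label and service-side port numbers are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    name: multi-endpoint-test   # assumed label; must match both test pods
  ports:
  - name: portname1
    port: 80
    targetPort: 100   # pod1's container port -> endpoints map[pod1:[100]]
  - name: portname2
    port: 81
    targetPort: 101   # pod2's container port -> endpoints map[pod2:[101]]
```

Because each pod serves only one of the two target ports, the endpoints controller lists each pod under exactly one port, which is what the `exposes endpoints map[...]` validations check as pods are created and deleted.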
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:53:41.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79
Feb  4 14:53:41.465: INFO: Pod name my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79: Found 0 pods out of 1
Feb  4 14:53:46.478: INFO: Pod name my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79: Found 1 pods out of 1
Feb  4 14:53:46.478: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79" are running
Feb  4 14:53:48.490: INFO: Pod "my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79-twmpk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 14:53:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 14:53:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 14:53:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 14:53:41 +0000 UTC Reason: Message:}])
Feb  4 14:53:48.490: INFO: Trying to dial the pod
Feb  4 14:53:53.525: INFO: Controller my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79: Got expected result from replica 1 [my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79-twmpk]: "my-hostname-basic-7238281f-1e5e-4c8b-a40e-ebb86992ee79-twmpk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:53:53.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9267" for this suite.
Feb  4 14:53:59.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:53:59.996: INFO: namespace replication-controller-9267 deletion completed in 6.46273477s

• [SLOW TEST:18.672 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:53:59.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:54:32.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-221" for this suite.
Feb  4 14:54:38.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:54:38.630: INFO: namespace namespaces-221 deletion completed in 6.243605137s
STEP: Destroying namespace "nsdeletetest-8513" for this suite.
Feb  4 14:54:38.633: INFO: Namespace nsdeletetest-8513 was already deleted
STEP: Destroying namespace "nsdeletetest-5498" for this suite.
Feb  4 14:54:44.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:54:44.876: INFO: namespace nsdeletetest-5498 deletion completed in 6.242731311s

• [SLOW TEST:44.879 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:54:44.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 14:54:45.033: INFO: Create a RollingUpdate DaemonSet
Feb  4 14:54:45.047: INFO: Check that daemon pods launch on every node of the cluster
Feb  4 14:54:45.117: INFO: Number of nodes with available pods: 0
Feb  4 14:54:45.117: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:46.135: INFO: Number of nodes with available pods: 0
Feb  4 14:54:46.135: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:47.163: INFO: Number of nodes with available pods: 0
Feb  4 14:54:47.163: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:48.132: INFO: Number of nodes with available pods: 0
Feb  4 14:54:48.132: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:49.128: INFO: Number of nodes with available pods: 0
Feb  4 14:54:49.128: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:51.075: INFO: Number of nodes with available pods: 0
Feb  4 14:54:51.075: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:51.480: INFO: Number of nodes with available pods: 0
Feb  4 14:54:51.480: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:52.228: INFO: Number of nodes with available pods: 0
Feb  4 14:54:52.229: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:53.128: INFO: Number of nodes with available pods: 0
Feb  4 14:54:53.129: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:54.129: INFO: Number of nodes with available pods: 1
Feb  4 14:54:54.129: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:55.149: INFO: Number of nodes with available pods: 1
Feb  4 14:54:55.149: INFO: Node iruya-node is running more than one daemon pod
Feb  4 14:54:56.136: INFO: Number of nodes with available pods: 2
Feb  4 14:54:56.136: INFO: Number of running nodes: 2, number of available pods: 2
Feb  4 14:54:56.136: INFO: Update the DaemonSet to trigger a rollout
Feb  4 14:54:56.156: INFO: Updating DaemonSet daemon-set
Feb  4 14:55:07.213: INFO: Roll back the DaemonSet before rollout is complete
Feb  4 14:55:07.236: INFO: Updating DaemonSet daemon-set
Feb  4 14:55:07.236: INFO: Make sure DaemonSet rollback is complete
Feb  4 14:55:07.286: INFO: Wrong image for pod: daemon-set-nklx2. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  4 14:55:07.286: INFO: Pod daemon-set-nklx2 is not available
Feb  4 14:55:08.382: INFO: Wrong image for pod: daemon-set-nklx2. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  4 14:55:08.382: INFO: Pod daemon-set-nklx2 is not available
Feb  4 14:55:10.383: INFO: Pod daemon-set-5j5lj is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1558, will wait for the garbage collector to delete the pods
Feb  4 14:55:10.460: INFO: Deleting DaemonSet.extensions daemon-set took: 12.610929ms
Feb  4 14:55:10.863: INFO: Terminating DaemonSet.extensions daemon-set pods took: 402.979282ms
Feb  4 14:55:27.898: INFO: Number of nodes with available pods: 0
Feb  4 14:55:27.898: INFO: Number of running nodes: 0, number of available pods: 0
Feb  4 14:55:27.904: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1558/daemonsets","resourceVersion":"23082895"},"items":null}

Feb  4 14:55:27.907: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1558/pods","resourceVersion":"23082895"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:55:27.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1558" for this suite.
Feb  4 14:55:33.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:55:34.104: INFO: namespace daemonsets-1558 deletion completed in 6.179557774s

• [SLOW TEST:49.228 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
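The rollback sequence exercised above (update the DaemonSet to a broken image, then roll back before the rollout completes) can be reproduced by hand; a minimal sketch, assuming the `daemon-set` DaemonSet from this test — the bad image `foo:non-existent` and the target image come from the log, but the container name `app` is an assumption, since the log does not show it:

```
# Trigger a rollout with a deliberately broken image, as the test does
# (container name "app" is hypothetical).
kubectl set image daemonset/daemon-set app=foo:non-existent -n daemonsets-1558

# Roll back to the previous revision before the rollout finishes; pods that
# were never successfully updated should not be restarted unnecessarily.
kubectl rollout undo daemonset/daemon-set -n daemonsets-1558

# Watch the rollback converge back to nginx:1.14-alpine.
kubectl rollout status daemonset/daemon-set -n daemonsets-1558
```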
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:55:34.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 14:55:34.208: INFO: Creating ReplicaSet my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14
Feb  4 14:55:34.248: INFO: Pod name my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14: Found 0 pods out of 1
Feb  4 14:55:39.257: INFO: Pod name my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14: Found 1 pods out of 1
Feb  4 14:55:39.257: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14" is running
Feb  4 14:55:47.324: INFO: Pod "my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14-qnggs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 14:55:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 14:55:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 14:55:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-04 14:55:34 +0000 UTC Reason: Message:}])
Feb  4 14:55:47.324: INFO: Trying to dial the pod
Feb  4 14:55:52.345: INFO: Controller my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14: Got expected result from replica 1 [my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14-qnggs]: "my-hostname-basic-5473be6e-087e-4c00-8ec2-7c5c9bceff14-qnggs", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:55:52.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7177" for this suite.
Feb  4 14:55:58.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:55:58.510: INFO: namespace replicaset-7177 deletion completed in 6.160898244s

• [SLOW TEST:24.406 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:55:58.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  4 14:55:58.626: INFO: PodSpec: initContainers in spec.initContainers
Feb  4 14:57:05.873: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fab0af57-ae59-465e-b365-3e5175a087fe", GenerateName:"", Namespace:"init-container-3287", SelfLink:"/api/v1/namespaces/init-container-3287/pods/pod-init-fab0af57-ae59-465e-b365-3e5175a087fe", UID:"91c7e913-33d8-46a4-b40f-899827a744f1", ResourceVersion:"23083110", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716424958, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"626443263"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-cs5bd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000fb60c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cs5bd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cs5bd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cs5bd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f22088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc001904000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f22110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f22130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f22138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f2213c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716424958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716424958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716424958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716424958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc003036760), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026f6070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026f60e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://246bd20f5f288475b355f128ec8061c2febeda0a5f2fa871fd8bb39b080014de"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030367a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003036780), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:57:05.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3287" for this suite.
Feb  4 14:57:27.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:57:28.047: INFO: namespace init-container-3287 deletion completed in 22.145229762s

• [SLOW TEST:89.535 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
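The failing-init-container behaviour verified above can be reproduced with a pod equivalent to the one dumped in the log; a sketch reconstructed from that dump (images, commands, and container names taken from the dump, other fields simplified):

```
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always        # RestartAlways: init1 is retried with backoff
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]    # always fails, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
```

Such a pod stays `Pending` with `Initialized=False` and a climbing `RestartCount` on `init1`, matching the status dump above.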
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:57:28.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  4 14:57:28.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3239'
Feb  4 14:57:30.009: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  4 14:57:30.010: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb  4 14:57:32.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3239'
Feb  4 14:57:32.238: INFO: stderr: ""
Feb  4 14:57:32.238: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:57:32.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3239" for this suite.
Feb  4 14:57:38.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:57:38.438: INFO: namespace kubectl-3239 deletion completed in 6.193918354s

• [SLOW TEST:10.390 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
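The stderr above flags `kubectl run --generator=deployment/apps.v1` as deprecated and suggests `kubectl create` instead. A sketch of the suggested replacement, using only the deployment name, image, and namespace from the log:

```
# Replacement suggested by the deprecation warning: kubectl create
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3239

# Clean up, as the test's AfterEach does.
kubectl delete deployment e2e-test-nginx-deployment --namespace=kubectl-3239
```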
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:57:38.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-abc7560e-10c3-46a5-8daa-86a9f9342577
STEP: Creating a pod to test consume configMaps
Feb  4 14:57:38.588: INFO: Waiting up to 5m0s for pod "pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299" in namespace "configmap-8015" to be "success or failure"
Feb  4 14:57:38.597: INFO: Pod "pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299": Phase="Pending", Reason="", readiness=false. Elapsed: 9.419077ms
Feb  4 14:57:40.613: INFO: Pod "pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025161973s
Feb  4 14:57:42.620: INFO: Pod "pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032504423s
Feb  4 14:57:44.628: INFO: Pod "pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040091545s
Feb  4 14:57:46.641: INFO: Pod "pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0529288s
Feb  4 14:57:48.658: INFO: Pod "pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070478157s
STEP: Saw pod success
Feb  4 14:57:48.658: INFO: Pod "pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299" satisfied condition "success or failure"
Feb  4 14:57:48.665: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299 container configmap-volume-test: 
STEP: delete the pod
Feb  4 14:57:48.745: INFO: Waiting for pod pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299 to disappear
Feb  4 14:57:48.758: INFO: Pod pod-configmaps-e39e25b4-c2b1-4269-a449-9891acda4299 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:57:48.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8015" for this suite.
Feb  4 14:57:54.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:57:54.912: INFO: namespace configmap-8015 deletion completed in 6.146780116s

• [SLOW TEST:16.474 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
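The "volume with mappings" case above mounts a ConfigMap while remapping keys to file paths via `items`; a minimal sketch of that pattern (all names here are hypothetical, not the test's exact objects):

```
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # hypothetical ConfigMap name
      items:
      - key: data-1                     # key in the ConfigMap
        path: path/to/data              # file path under the mountPath
```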
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:57:54.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:58:55.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6408" for this suite.
Feb  4 14:59:19.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:59:19.232: INFO: namespace container-probe-6408 deletion completed in 24.189615383s

• [SLOW TEST:84.319 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
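The probe test above relies on the distinction that a failing readiness probe keeps a pod out of the Ready condition but never restarts it (only liveness probes trigger restarts); a sketch of such a pod (hypothetical manifest, not the test's own):

```
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo
spec:
  containers:
  - name: probe-test
    image: k8s.gcr.io/pause:3.1
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: Ready stays False,
      periodSeconds: 5            # but RestartCount stays 0
```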
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:59:19.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  4 14:59:19.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7730'
Feb  4 14:59:19.527: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  4 14:59:19.528: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  4 14:59:19.564: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-4kh9h]
Feb  4 14:59:19.565: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-4kh9h" in namespace "kubectl-7730" to be "running and ready"
Feb  4 14:59:19.568: INFO: Pod "e2e-test-nginx-rc-4kh9h": Phase="Pending", Reason="", readiness=false. Elapsed: 3.900677ms
Feb  4 14:59:21.580: INFO: Pod "e2e-test-nginx-rc-4kh9h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015905426s
Feb  4 14:59:23.727: INFO: Pod "e2e-test-nginx-rc-4kh9h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16226388s
Feb  4 14:59:25.734: INFO: Pod "e2e-test-nginx-rc-4kh9h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169114265s
Feb  4 14:59:27.744: INFO: Pod "e2e-test-nginx-rc-4kh9h": Phase="Running", Reason="", readiness=true. Elapsed: 8.179196038s
Feb  4 14:59:27.744: INFO: Pod "e2e-test-nginx-rc-4kh9h" satisfied condition "running and ready"
Feb  4 14:59:27.744: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-4kh9h]
Feb  4 14:59:27.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7730'
Feb  4 14:59:27.934: INFO: stderr: ""
Feb  4 14:59:27.934: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb  4 14:59:27.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7730'
Feb  4 14:59:28.113: INFO: stderr: ""
Feb  4 14:59:28.113: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:59:28.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7730" for this suite.
Feb  4 14:59:50.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 14:59:50.351: INFO: namespace kubectl-7730 deletion completed in 22.231408429s

• [SLOW TEST:31.119 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 14:59:50.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  4 14:59:50.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6574'
Feb  4 14:59:50.595: INFO: stderr: ""
Feb  4 14:59:50.596: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb  4 14:59:50.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6574'
Feb  4 14:59:56.549: INFO: stderr: ""
Feb  4 14:59:56.549: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 14:59:56.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6574" for this suite.
Feb  4 15:00:02.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:00:02.702: INFO: namespace kubectl-6574 deletion completed in 6.141336787s

• [SLOW TEST:12.351 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
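The spec above checks kubectl's output directly: stderr must be empty and stdout must report the created resource (`pod/e2e-test-nginx-pod created`). A minimal Python sketch of that check — a hypothetical helper, not the Go framework code:

```python
def parse_kubectl_create_output(stdout: str, stderr: str):
    """Mimic the e2e check: stderr must be empty and stdout must name
    the created resource, e.g. 'pod/e2e-test-nginx-pod created'."""
    if stderr:
        raise RuntimeError(f"kubectl reported an error: {stderr!r}")
    line = stdout.strip()
    if not line.endswith(" created"):
        raise RuntimeError(f"unexpected kubectl output: {line!r}")
    resource = line[: -len(" created")]        # e.g. 'pod/e2e-test-nginx-pod'
    kind, _, name = resource.partition("/")
    return kind, name
```

Feeding it the stdout/stderr recorded in the log would yield `("pod", "e2e-test-nginx-pod")`.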
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:00:02.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 15:00:02.917: INFO: Waiting up to 5m0s for pod "downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b" in namespace "projected-6494" to be "success or failure"
Feb  4 15:00:02.924: INFO: Pod "downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.095566ms
Feb  4 15:00:04.936: INFO: Pod "downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018354088s
Feb  4 15:00:06.951: INFO: Pod "downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034073512s
Feb  4 15:00:08.960: INFO: Pod "downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042997864s
Feb  4 15:00:10.972: INFO: Pod "downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054532782s
Feb  4 15:00:12.988: INFO: Pod "downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070819193s
STEP: Saw pod success
Feb  4 15:00:12.988: INFO: Pod "downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b" satisfied condition "success or failure"
Feb  4 15:00:12.993: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b container client-container: 
STEP: delete the pod
Feb  4 15:00:13.157: INFO: Waiting for pod downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b to disappear
Feb  4 15:00:13.175: INFO: Pod downwardapi-volume-873896a3-32cc-4371-ba94-8b5d1b8ed97b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:00:13.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6494" for this suite.
Feb  4 15:00:19.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:00:19.447: INFO: namespace projected-6494 deletion completed in 6.259354096s

• [SLOW TEST:16.745 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
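Every "Waiting up to 5m0s for pod … to be 'success or failure'" run above follows the same poll pattern: check the pod phase roughly every 2 s until it is terminal or the timeout expires. A Python sketch of that loop (illustrative only; the real framework is Go):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the
    timeout expires. Returns (phase, elapsed_seconds)."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in want:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"still {phase!r} after {elapsed:.1f}s")
        sleep(interval)
```

With a stub that reports "Pending" a few times and then "Succeeded", the loop returns exactly the sequence of Elapsed lines seen in the log.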
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:00:19.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  4 15:00:19.560: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  4 15:00:19.578: INFO: Waiting for terminating namespaces to be deleted...
Feb  4 15:00:19.580: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  4 15:00:19.591: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  4 15:00:19.591: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  4 15:00:19.591: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  4 15:00:19.591: INFO: 	Container weave ready: true, restart count 0
Feb  4 15:00:19.591: INFO: 	Container weave-npc ready: true, restart count 0
Feb  4 15:00:19.591: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  4 15:00:19.603: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  4 15:00:19.603: INFO: 	Container etcd ready: true, restart count 0
Feb  4 15:00:19.603: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  4 15:00:19.603: INFO: 	Container weave ready: true, restart count 0
Feb  4 15:00:19.603: INFO: 	Container weave-npc ready: true, restart count 0
Feb  4 15:00:19.603: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  4 15:00:19.603: INFO: 	Container coredns ready: true, restart count 0
Feb  4 15:00:19.603: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  4 15:00:19.603: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  4 15:00:19.603: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  4 15:00:19.603: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  4 15:00:19.603: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  4 15:00:19.603: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  4 15:00:19.603: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  4 15:00:19.603: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  4 15:00:19.603: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  4 15:00:19.603: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d52ff62d-c8ca-4a64-b2ed-c8fd6a088b74 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d52ff62d-c8ca-4a64-b2ed-c8fd6a088b74 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d52ff62d-c8ca-4a64-b2ed-c8fd6a088b74
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:00:39.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8869" for this suite.
Feb  4 15:00:53.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:00:54.019: INFO: namespace sched-pred-8869 deletion completed in 14.149732676s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:34.573 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
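The NodeSelector predicate being validated is a plain subset test: every key/value in the pod's nodeSelector must appear in the node's labels, which is why the spec stamps a random label on a node and then relaunches the pod with that label as its selector. A sketch of the matching rule:

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """A pod's nodeSelector matches a node iff every selector entry is
    present in the node's labels with the same value (subset semantics)."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())
```

Once the test removes the label from iruya-node, the same selector no longer matches.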
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:00:54.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 15:00:54.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e" in namespace "projected-5204" to be "success or failure"
Feb  4 15:00:54.104: INFO: Pod "downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.449245ms
Feb  4 15:00:56.113: INFO: Pod "downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012801355s
Feb  4 15:00:58.129: INFO: Pod "downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028441846s
Feb  4 15:01:00.142: INFO: Pod "downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041510931s
Feb  4 15:01:02.151: INFO: Pod "downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050164125s
Feb  4 15:01:04.165: INFO: Pod "downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064975536s
STEP: Saw pod success
Feb  4 15:01:04.166: INFO: Pod "downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e" satisfied condition "success or failure"
Feb  4 15:01:04.168: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e container client-container: 
STEP: delete the pod
Feb  4 15:01:04.433: INFO: Waiting for pod downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e to disappear
Feb  4 15:01:04.506: INFO: Pod downwardapi-volume-e92b430e-45d1-4c12-bdc9-c92505fd206e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:01:04.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5204" for this suite.
Feb  4 15:01:10.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:01:10.654: INFO: namespace projected-5204 deletion completed in 6.135466909s

• [SLOW TEST:16.634 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
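DefaultMode is a file-permission bits field on the projected volume; in JSON manifests it is written as a decimal integer, so the common 0644 appears as 420. A small sketch of that decimal-to-octal rendering:

```python
def default_mode_to_octal(mode: int) -> str:
    """Render a projected-volume defaultMode (a decimal integer in JSON)
    as the octal string users think in, e.g. 420 -> '0644'."""
    if not 0 <= mode <= 0o777:
        raise ValueError(f"mode out of range: {mode}")
    return format(mode, "04o")
```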
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:01:10.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 15:01:42.801: INFO: Container started at 2020-02-04 15:01:18 +0000 UTC, pod became ready at 2020-02-04 15:01:41 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:01:42.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1276" for this suite.
Feb  4 15:02:06.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:02:06.950: INFO: namespace container-probe-1276 deletion completed in 24.144967557s

• [SLOW TEST:56.295 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
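The assertion behind this spec is that readiness must lag container start by at least the probe's initialDelaySeconds; in the run above the container started at 15:01:18 and became ready at 15:01:41, a 23 s gap. A sketch of that check (the 20 s delay below is an assumed illustrative value, not taken from the log):

```python
from datetime import datetime

def readiness_delay_ok(started: str, ready: str, initial_delay_s: int) -> bool:
    """True iff the pod became ready no earlier than initial_delay_s
    seconds after the container started. Timestamps as 'HH:MM:SS'."""
    fmt = "%H:%M:%S"
    gap = (datetime.strptime(ready, fmt) - datetime.strptime(started, fmt)).total_seconds()
    return gap >= initial_delay_s
```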
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:02:06.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0204 15:02:09.641669       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  4 15:02:09.641: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:02:09.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-316" for this suite.
Feb  4 15:02:15.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:02:15.814: INFO: namespace gc-316 deletion completed in 6.166669838s

• [SLOW TEST:8.864 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
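Cascading deletion here is driven by ownerReferences: when the Deployment is deleted without orphaning, the garbage collector removes every object whose owner chain leads back to it — the ReplicaSet, then its Pods (hence the transient "expected 0 rs, got 1 rs" retries above). A toy model of that walk, not the real controller:

```python
def cascade_delete(objects: dict, root: str) -> set:
    """objects maps name -> owner name (or None for no owner).
    Return the set of objects removed when `root` is deleted
    without orphaning: root plus all transitive dependents."""
    doomed = {root}
    changed = True
    while changed:
        changed = False
        for name, owner in objects.items():
            if owner in doomed and name not in doomed:
                doomed.add(name)
                changed = True
    return doomed
```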
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:02:15.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  4 15:02:36.175: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 15:02:36.185: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 15:02:38.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 15:02:38.195: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 15:02:40.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 15:02:40.197: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 15:02:42.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 15:02:42.193: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 15:02:44.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 15:02:44.226: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 15:02:46.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 15:02:46.195: INFO: Pod pod-with-poststart-http-hook still exists
Feb  4 15:02:48.185: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  4 15:02:48.193: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:02:48.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8443" for this suite.
Feb  4 15:03:10.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:03:10.395: INFO: namespace container-lifecycle-hook-8443 deletion completed in 22.195332074s

• [SLOW TEST:54.581 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
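The postStart ordering being exercised: the hook fires right after the container starts, and the container only counts as running once the hook has completed — if the hook fails, the container is killed according to its restart policy. A toy event-ordering sketch of that contract:

```python
def lifecycle_events(poststart_ok: bool) -> list:
    """Event order for a container with a postStart hook: the hook runs
    immediately after start, and the container is not considered running
    until the hook succeeds; a failed hook gets the container killed."""
    events = ["container started", "postStart hook fired"]
    if poststart_ok:
        events.append("container running")
    else:
        events.append("container killed (hook failed)")
    return events
```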
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:03:10.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-jb22
STEP: Creating a pod to test atomic-volume-subpath
Feb  4 15:03:10.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jb22" in namespace "subpath-8709" to be "success or failure"
Feb  4 15:03:10.830: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Pending", Reason="", readiness=false. Elapsed: 21.159316ms
Feb  4 15:03:12.839: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029846912s
Feb  4 15:03:15.484: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.675501436s
Feb  4 15:03:17.497: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.688693065s
Feb  4 15:03:19.510: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 8.701582255s
Feb  4 15:03:21.520: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 10.711106462s
Feb  4 15:03:23.531: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 12.722589237s
Feb  4 15:03:25.538: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 14.729745357s
Feb  4 15:03:27.546: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 16.737220454s
Feb  4 15:03:29.556: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 18.746817611s
Feb  4 15:03:31.568: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 20.75891757s
Feb  4 15:03:33.579: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 22.770030669s
Feb  4 15:03:35.626: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 24.81773733s
Feb  4 15:03:37.635: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 26.826338602s
Feb  4 15:03:39.645: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Running", Reason="", readiness=true. Elapsed: 28.836146826s
Feb  4 15:03:41.655: INFO: Pod "pod-subpath-test-projected-jb22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.845835586s
STEP: Saw pod success
Feb  4 15:03:41.655: INFO: Pod "pod-subpath-test-projected-jb22" satisfied condition "success or failure"
Feb  4 15:03:41.664: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-jb22 container test-container-subpath-projected-jb22: 
STEP: delete the pod
Feb  4 15:03:41.734: INFO: Waiting for pod pod-subpath-test-projected-jb22 to disappear
Feb  4 15:03:41.800: INFO: Pod pod-subpath-test-projected-jb22 no longer exists
STEP: Deleting pod pod-subpath-test-projected-jb22
Feb  4 15:03:41.800: INFO: Deleting pod "pod-subpath-test-projected-jb22" in namespace "subpath-8709"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:03:41.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8709" for this suite.
Feb  4 15:03:47.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:03:48.029: INFO: namespace subpath-8709 deletion completed in 6.215871134s

• [SLOW TEST:37.634 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
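subPath mounts a single entry of a volume into the container, and the kubelet must refuse paths that would escape the volume root (e.g. `../etc`). A sketch of that containment check — a hypothetical helper modelling the guard, not kubelet code:

```python
import posixpath

def resolve_subpath(volume_root: str, sub_path: str) -> str:
    """Join sub_path under volume_root and reject anything that would
    escape the root after normalization, mirroring the kubelet's guard."""
    full = posixpath.normpath(posixpath.join(volume_root, sub_path))
    if full != volume_root and not full.startswith(volume_root + "/"):
        raise ValueError(f"subPath {sub_path!r} escapes volume root")
    return full
```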
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:03:48.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2c5a3d7b-0d3f-4a78-982b-f1a41c7bb723
STEP: Creating a pod to test consume configMaps
Feb  4 15:03:48.192: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b" in namespace "projected-7128" to be "success or failure"
Feb  4 15:03:48.201: INFO: Pod "pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.582465ms
Feb  4 15:03:50.210: INFO: Pod "pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018139056s
Feb  4 15:03:52.217: INFO: Pod "pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024848777s
Feb  4 15:03:54.224: INFO: Pod "pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032329624s
Feb  4 15:03:56.231: INFO: Pod "pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039365829s
Feb  4 15:03:58.240: INFO: Pod "pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.047723923s
STEP: Saw pod success
Feb  4 15:03:58.240: INFO: Pod "pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b" satisfied condition "success or failure"
Feb  4 15:03:58.244: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b container projected-configmap-volume-test: 
STEP: delete the pod
Feb  4 15:03:58.342: INFO: Waiting for pod pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b to disappear
Feb  4 15:03:58.361: INFO: Pod pod-projected-configmaps-f7781a55-305d-409a-b072-08287385e86b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:03:58.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7128" for this suite.
Feb  4 15:04:04.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:04:04.640: INFO: namespace projected-7128 deletion completed in 6.262867744s

• [SLOW TEST:16.610 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
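"With mappings" means the ConfigMap's `items` list remaps keys to file paths inside the volume, and keys not listed are omitted once `items` is set. A toy projection of that behaviour:

```python
def project_configmap(data: dict, items: list) -> dict:
    """Return {relative_path: content} for a projected ConfigMap volume.
    items is a list of {'key': ..., 'path': ...} mappings; keys not
    listed are omitted, matching the API's behaviour when items is set."""
    out = {}
    for item in items:
        key = item["key"]
        if key not in data:
            raise KeyError(f"configMap has no key {key!r}")
        out[item["path"]] = data[key]
    return out
```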
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:04:04.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 15:04:04.800: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471" in namespace "projected-1833" to be "success or failure"
Feb  4 15:04:04.847: INFO: Pod "downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471": Phase="Pending", Reason="", readiness=false. Elapsed: 46.867846ms
Feb  4 15:04:06.869: INFO: Pod "downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069213061s
Feb  4 15:04:08.882: INFO: Pod "downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081649776s
Feb  4 15:04:10.895: INFO: Pod "downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094604777s
Feb  4 15:04:12.904: INFO: Pod "downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103694714s
Feb  4 15:04:14.911: INFO: Pod "downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111278613s
STEP: Saw pod success
Feb  4 15:04:14.912: INFO: Pod "downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471" satisfied condition "success or failure"
Feb  4 15:04:14.922: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471 container client-container: 
STEP: delete the pod
Feb  4 15:04:15.015: INFO: Waiting for pod downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471 to disappear
Feb  4 15:04:15.023: INFO: Pod downwardapi-volume-25f553c8-15a1-4136-adfb-9446c93e9471 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:04:15.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1833" for this suite.
Feb  4 15:04:21.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:04:21.285: INFO: namespace projected-1833 deletion completed in 6.254813144s

• [SLOW TEST:16.644 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
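[editor's note] The projected downwardAPI test above creates a pod whose projected volume exposes the container's own CPU limit via a resourceFieldRef, then reads the file back. A rough reconstruction of such a pod follows; the name, image, and limit value are illustrative assumptions, not taken from the log:

```yaml
# Hypothetical sketch of the pod this conformance test builds: a projected
# downwardAPI volume that surfaces the container's CPU limit as a file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # real run used a generated UUID name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumption; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
```

The pod phase transitions in the log (Pending for ~10s, then Succeeded) match this restartPolicy=Never, run-once pattern: the test passes when the pod exits 0 ("success or failure" condition).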
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:04:21.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-07479b3b-8adc-46b1-b1e8-4ce4c947f72c in namespace container-probe-9454
Feb  4 15:04:31.378: INFO: Started pod busybox-07479b3b-8adc-46b1-b1e8-4ce4c947f72c in namespace container-probe-9454
STEP: checking the pod's current state and verifying that restartCount is present
Feb  4 15:04:31.383: INFO: Initial restart count of pod busybox-07479b3b-8adc-46b1-b1e8-4ce4c947f72c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:08:33.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9454" for this suite.
Feb  4 15:08:39.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:08:39.652: INFO: namespace container-probe-9454 deletion completed in 6.170952584s

• [SLOW TEST:258.368 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
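[editor's note] The liveness-probe test above records the pod's initial restartCount (0) and then simply waits: note the ~4-minute gap between 15:04:31 and the 15:08:33 teardown, during which the probe must keep succeeding. A sketch of such a pod, with illustrative values not taken from the log:

```yaml
# Hypothetical sketch: a busybox pod whose exec liveness probe keeps passing,
# so the kubelet never restarts the container (restartCount stays 0).
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example     # real run used a generated UUID name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo ok > /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the probe the test name refers to
      initialDelaySeconds: 15             # illustrative timings
      periodSeconds: 5
      failureThreshold: 1
```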
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:08:39.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 15:08:39.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff" in namespace "projected-1456" to be "success or failure"
Feb  4 15:08:39.828: INFO: Pod "downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 58.735524ms
Feb  4 15:08:41.836: INFO: Pod "downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066983148s
Feb  4 15:08:43.868: INFO: Pod "downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099291264s
Feb  4 15:08:45.885: INFO: Pod "downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115887361s
Feb  4 15:08:47.908: INFO: Pod "downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139259886s
Feb  4 15:08:49.918: INFO: Pod "downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148990966s
STEP: Saw pod success
Feb  4 15:08:49.918: INFO: Pod "downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff" satisfied condition "success or failure"
Feb  4 15:08:49.924: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff container client-container: 
STEP: delete the pod
Feb  4 15:08:50.243: INFO: Waiting for pod downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff to disappear
Feb  4 15:08:50.250: INFO: Pod downwardapi-volume-9abe5e65-d4e4-44fb-a945-80d56422f3ff no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:08:50.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1456" for this suite.
Feb  4 15:08:56.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:08:56.916: INFO: namespace projected-1456 deletion completed in 6.648614599s

• [SLOW TEST:17.263 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:08:56.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7903
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  4 15:08:56.995: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  4 15:09:31.271: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7903 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:09:31.271: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:09:31.361070       8 log.go:172] (0xc000b23760) (0xc000e2abe0) Create stream
I0204 15:09:31.361317       8 log.go:172] (0xc000b23760) (0xc000e2abe0) Stream added, broadcasting: 1
I0204 15:09:31.368585       8 log.go:172] (0xc000b23760) Reply frame received for 1
I0204 15:09:31.368636       8 log.go:172] (0xc000b23760) (0xc002f9db80) Create stream
I0204 15:09:31.368646       8 log.go:172] (0xc000b23760) (0xc002f9db80) Stream added, broadcasting: 3
I0204 15:09:31.370146       8 log.go:172] (0xc000b23760) Reply frame received for 3
I0204 15:09:31.370187       8 log.go:172] (0xc000b23760) (0xc000e2ad20) Create stream
I0204 15:09:31.370201       8 log.go:172] (0xc000b23760) (0xc000e2ad20) Stream added, broadcasting: 5
I0204 15:09:31.374336       8 log.go:172] (0xc000b23760) Reply frame received for 5
I0204 15:09:31.551244       8 log.go:172] (0xc000b23760) Data frame received for 3
I0204 15:09:31.551300       8 log.go:172] (0xc002f9db80) (3) Data frame handling
I0204 15:09:31.551317       8 log.go:172] (0xc002f9db80) (3) Data frame sent
I0204 15:09:31.681506       8 log.go:172] (0xc000b23760) (0xc002f9db80) Stream removed, broadcasting: 3
I0204 15:09:31.681631       8 log.go:172] (0xc000b23760) Data frame received for 1
I0204 15:09:31.681659       8 log.go:172] (0xc000e2abe0) (1) Data frame handling
I0204 15:09:31.681680       8 log.go:172] (0xc000e2abe0) (1) Data frame sent
I0204 15:09:31.681695       8 log.go:172] (0xc000b23760) (0xc000e2ad20) Stream removed, broadcasting: 5
I0204 15:09:31.681749       8 log.go:172] (0xc000b23760) (0xc000e2abe0) Stream removed, broadcasting: 1
I0204 15:09:31.681766       8 log.go:172] (0xc000b23760) Go away received
I0204 15:09:31.682211       8 log.go:172] (0xc000b23760) (0xc000e2abe0) Stream removed, broadcasting: 1
I0204 15:09:31.682263       8 log.go:172] (0xc000b23760) (0xc002f9db80) Stream removed, broadcasting: 3
I0204 15:09:31.682281       8 log.go:172] (0xc000b23760) (0xc000e2ad20) Stream removed, broadcasting: 5
Feb  4 15:09:31.682: INFO: Found all expected endpoints: [netserver-0]
Feb  4 15:09:31.696: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7903 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:09:31.696: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:09:31.756570       8 log.go:172] (0xc000cd2bb0) (0xc000238780) Create stream
I0204 15:09:31.756614       8 log.go:172] (0xc000cd2bb0) (0xc000238780) Stream added, broadcasting: 1
I0204 15:09:31.765167       8 log.go:172] (0xc000cd2bb0) Reply frame received for 1
I0204 15:09:31.765209       8 log.go:172] (0xc000cd2bb0) (0xc002f952c0) Create stream
I0204 15:09:31.765218       8 log.go:172] (0xc000cd2bb0) (0xc002f952c0) Stream added, broadcasting: 3
I0204 15:09:31.768581       8 log.go:172] (0xc000cd2bb0) Reply frame received for 3
I0204 15:09:31.768605       8 log.go:172] (0xc000cd2bb0) (0xc000238aa0) Create stream
I0204 15:09:31.768613       8 log.go:172] (0xc000cd2bb0) (0xc000238aa0) Stream added, broadcasting: 5
I0204 15:09:31.773702       8 log.go:172] (0xc000cd2bb0) Reply frame received for 5
I0204 15:09:31.931893       8 log.go:172] (0xc000cd2bb0) Data frame received for 3
I0204 15:09:31.931996       8 log.go:172] (0xc002f952c0) (3) Data frame handling
I0204 15:09:31.932012       8 log.go:172] (0xc002f952c0) (3) Data frame sent
I0204 15:09:32.097622       8 log.go:172] (0xc000cd2bb0) Data frame received for 1
I0204 15:09:32.097678       8 log.go:172] (0xc000cd2bb0) (0xc000238aa0) Stream removed, broadcasting: 5
I0204 15:09:32.097705       8 log.go:172] (0xc000238780) (1) Data frame handling
I0204 15:09:32.097729       8 log.go:172] (0xc000238780) (1) Data frame sent
I0204 15:09:32.097741       8 log.go:172] (0xc000cd2bb0) (0xc002f952c0) Stream removed, broadcasting: 3
I0204 15:09:32.097778       8 log.go:172] (0xc000cd2bb0) (0xc000238780) Stream removed, broadcasting: 1
I0204 15:09:32.097792       8 log.go:172] (0xc000cd2bb0) Go away received
I0204 15:09:32.098063       8 log.go:172] (0xc000cd2bb0) (0xc000238780) Stream removed, broadcasting: 1
I0204 15:09:32.098302       8 log.go:172] (0xc000cd2bb0) (0xc002f952c0) Stream removed, broadcasting: 3
I0204 15:09:32.098321       8 log.go:172] (0xc000cd2bb0) (0xc000238aa0) Stream removed, broadcasting: 5
Feb  4 15:09:32.098: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:09:32.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7903" for this suite.
Feb  4 15:09:56.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:09:56.334: INFO: namespace pod-network-test-7903 deletion completed in 24.220493245s

• [SLOW TEST:59.418 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
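[editor's note] The node-pod connectivity check is visible verbatim in the ExecWithOptions lines above: the suite execs a curl from the host-network helper pod against each netserver pod's /hostName endpoint. Reassembled from the log (pod IPs are the ones from this run):

```shell
# Probe as exec'd inside host-test-container-pod, per the log:
#
#   curl -g -q -s --max-time 15 --connect-timeout 1 \
#     http://10.32.0.4:8080/hostName | grep -v '^\s*$'
#
# The grep stage strips blank lines so only the responding pod's hostname
# survives for comparison against the expected endpoint list. The log's
# '\s' is a GNU grep extension; the portable bracket-class form is used in
# this self-contained illustration of the filter:
printf 'netserver-0\n\n \n' | grep -v '^[[:space:]]*$'
```

The test then matches the surviving line against the expected endpoints, which is what "Found all expected endpoints: [netserver-0]" reports.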
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:09:56.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:10:06.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-658" for this suite.
Feb  4 15:10:12.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:10:12.803: INFO: namespace emptydir-wrapper-658 deletion completed in 6.135498797s

• [SLOW TEST:16.468 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:10:12.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  4 15:10:12.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca" in namespace "downward-api-1814" to be "success or failure"
Feb  4 15:10:12.950: INFO: Pod "downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca": Phase="Pending", Reason="", readiness=false. Elapsed: 66.43721ms
Feb  4 15:10:14.963: INFO: Pod "downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079233307s
Feb  4 15:10:17.794: INFO: Pod "downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.909978076s
Feb  4 15:10:19.822: INFO: Pod "downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.938789331s
Feb  4 15:10:21.833: INFO: Pod "downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.94955451s
Feb  4 15:10:23.841: INFO: Pod "downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.957244678s
STEP: Saw pod success
Feb  4 15:10:23.841: INFO: Pod "downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca" satisfied condition "success or failure"
Feb  4 15:10:23.845: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca container client-container: 
STEP: delete the pod
Feb  4 15:10:23.966: INFO: Waiting for pod downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca to disappear
Feb  4 15:10:23.976: INFO: Pod downwardapi-volume-8b9ddecd-79e4-4e66-a343-62774fef6aca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:10:23.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1814" for this suite.
Feb  4 15:10:30.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:10:30.161: INFO: namespace downward-api-1814 deletion completed in 6.180305735s

• [SLOW TEST:17.358 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:10:30.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-aa267307-92f7-4c82-8e30-045d3e8b0e18
STEP: Creating a pod to test consume secrets
Feb  4 15:10:30.280: INFO: Waiting up to 5m0s for pod "pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2" in namespace "secrets-856" to be "success or failure"
Feb  4 15:10:30.285: INFO: Pod "pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.489116ms
Feb  4 15:10:32.293: INFO: Pod "pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013160575s
Feb  4 15:10:34.301: INFO: Pod "pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021220851s
Feb  4 15:10:36.312: INFO: Pod "pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031977674s
Feb  4 15:10:38.324: INFO: Pod "pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044044219s
Feb  4 15:10:40.333: INFO: Pod "pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053434698s
STEP: Saw pod success
Feb  4 15:10:40.333: INFO: Pod "pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2" satisfied condition "success or failure"
Feb  4 15:10:40.337: INFO: Trying to get logs from node iruya-node pod pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2 container secret-volume-test: 
STEP: delete the pod
Feb  4 15:10:40.430: INFO: Waiting for pod pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2 to disappear
Feb  4 15:10:40.446: INFO: Pod pod-secrets-77500251-832e-49f0-9aa6-bdc28dab79b2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:10:40.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-856" for this suite.
Feb  4 15:10:46.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:10:46.640: INFO: namespace secrets-856 deletion completed in 6.181418784s

• [SLOW TEST:16.478 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
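[editor's note] The Secrets volume test above creates a Secret, mounts it into a run-once pod (container "secret-volume-test" per the log), and verifies the content by reading the mounted file. A rough sketch, with key names and data invented for illustration:

```yaml
# Hypothetical sketch of the secret-consumption pod; secretName matches the
# STEP line above, everything else is an assumption.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # real run used a generated UUID name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                   # assumption; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-aa267307-92f7-4c82-8e30-045d3e8b0e18
```

As with the other volume tests in this log, success is the pod reaching phase Succeeded after printing the expected file contents.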
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:10:46.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  4 15:11:10.831: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:10.831: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:10.910209       8 log.go:172] (0xc000ff6210) (0xc00011b680) Create stream
I0204 15:11:10.910283       8 log.go:172] (0xc000ff6210) (0xc00011b680) Stream added, broadcasting: 1
I0204 15:11:10.917131       8 log.go:172] (0xc000ff6210) Reply frame received for 1
I0204 15:11:10.917196       8 log.go:172] (0xc000ff6210) (0xc00198e320) Create stream
I0204 15:11:10.917209       8 log.go:172] (0xc000ff6210) (0xc00198e320) Stream added, broadcasting: 3
I0204 15:11:10.920700       8 log.go:172] (0xc000ff6210) Reply frame received for 3
I0204 15:11:10.920734       8 log.go:172] (0xc000ff6210) (0xc0023dd9a0) Create stream
I0204 15:11:10.920744       8 log.go:172] (0xc000ff6210) (0xc0023dd9a0) Stream added, broadcasting: 5
I0204 15:11:10.922361       8 log.go:172] (0xc000ff6210) Reply frame received for 5
I0204 15:11:11.045066       8 log.go:172] (0xc000ff6210) Data frame received for 3
I0204 15:11:11.045125       8 log.go:172] (0xc00198e320) (3) Data frame handling
I0204 15:11:11.045149       8 log.go:172] (0xc00198e320) (3) Data frame sent
I0204 15:11:11.201296       8 log.go:172] (0xc000ff6210) (0xc00198e320) Stream removed, broadcasting: 3
I0204 15:11:11.201505       8 log.go:172] (0xc000ff6210) Data frame received for 1
I0204 15:11:11.201524       8 log.go:172] (0xc000ff6210) (0xc0023dd9a0) Stream removed, broadcasting: 5
I0204 15:11:11.201549       8 log.go:172] (0xc00011b680) (1) Data frame handling
I0204 15:11:11.201566       8 log.go:172] (0xc00011b680) (1) Data frame sent
I0204 15:11:11.201573       8 log.go:172] (0xc000ff6210) (0xc00011b680) Stream removed, broadcasting: 1
I0204 15:11:11.201583       8 log.go:172] (0xc000ff6210) Go away received
I0204 15:11:11.201862       8 log.go:172] (0xc000ff6210) (0xc00011b680) Stream removed, broadcasting: 1
I0204 15:11:11.201886       8 log.go:172] (0xc000ff6210) (0xc00198e320) Stream removed, broadcasting: 3
I0204 15:11:11.201901       8 log.go:172] (0xc000ff6210) (0xc0023dd9a0) Stream removed, broadcasting: 5
Feb  4 15:11:11.201: INFO: Exec stderr: ""
Feb  4 15:11:11.202: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:11.202: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:11.260213       8 log.go:172] (0xc00116e210) (0xc002f95ea0) Create stream
I0204 15:11:11.260308       8 log.go:172] (0xc00116e210) (0xc002f95ea0) Stream added, broadcasting: 1
I0204 15:11:11.268063       8 log.go:172] (0xc00116e210) Reply frame received for 1
I0204 15:11:11.268155       8 log.go:172] (0xc00116e210) (0xc0020ca0a0) Create stream
I0204 15:11:11.268165       8 log.go:172] (0xc00116e210) (0xc0020ca0a0) Stream added, broadcasting: 3
I0204 15:11:11.269663       8 log.go:172] (0xc00116e210) Reply frame received for 3
I0204 15:11:11.269686       8 log.go:172] (0xc00116e210) (0xc0024ec000) Create stream
I0204 15:11:11.269694       8 log.go:172] (0xc00116e210) (0xc0024ec000) Stream added, broadcasting: 5
I0204 15:11:11.278199       8 log.go:172] (0xc00116e210) Reply frame received for 5
I0204 15:11:11.400567       8 log.go:172] (0xc00116e210) Data frame received for 3
I0204 15:11:11.400713       8 log.go:172] (0xc0020ca0a0) (3) Data frame handling
I0204 15:11:11.400739       8 log.go:172] (0xc0020ca0a0) (3) Data frame sent
I0204 15:11:11.556557       8 log.go:172] (0xc00116e210) Data frame received for 1
I0204 15:11:11.556699       8 log.go:172] (0xc00116e210) (0xc0020ca0a0) Stream removed, broadcasting: 3
I0204 15:11:11.556792       8 log.go:172] (0xc002f95ea0) (1) Data frame handling
I0204 15:11:11.556814       8 log.go:172] (0xc002f95ea0) (1) Data frame sent
I0204 15:11:11.556961       8 log.go:172] (0xc00116e210) (0xc0024ec000) Stream removed, broadcasting: 5
I0204 15:11:11.557057       8 log.go:172] (0xc00116e210) (0xc002f95ea0) Stream removed, broadcasting: 1
I0204 15:11:11.557237       8 log.go:172] (0xc00116e210) Go away received
I0204 15:11:11.557352       8 log.go:172] (0xc00116e210) (0xc002f95ea0) Stream removed, broadcasting: 1
I0204 15:11:11.557384       8 log.go:172] (0xc00116e210) (0xc0020ca0a0) Stream removed, broadcasting: 3
I0204 15:11:11.557406       8 log.go:172] (0xc00116e210) (0xc0024ec000) Stream removed, broadcasting: 5
Feb  4 15:11:11.557: INFO: Exec stderr: ""
Feb  4 15:11:11.557: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:11.557: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:11.626526       8 log.go:172] (0xc00116edc0) (0xc0024ec780) Create stream
I0204 15:11:11.626603       8 log.go:172] (0xc00116edc0) (0xc0024ec780) Stream added, broadcasting: 1
I0204 15:11:11.634186       8 log.go:172] (0xc00116edc0) Reply frame received for 1
I0204 15:11:11.634218       8 log.go:172] (0xc00116edc0) (0xc0021ae000) Create stream
I0204 15:11:11.634228       8 log.go:172] (0xc00116edc0) (0xc0021ae000) Stream added, broadcasting: 3
I0204 15:11:11.636972       8 log.go:172] (0xc00116edc0) Reply frame received for 3
I0204 15:11:11.637118       8 log.go:172] (0xc00116edc0) (0xc0023ddae0) Create stream
I0204 15:11:11.637125       8 log.go:172] (0xc00116edc0) (0xc0023ddae0) Stream added, broadcasting: 5
I0204 15:11:11.638586       8 log.go:172] (0xc00116edc0) Reply frame received for 5
I0204 15:11:11.732271       8 log.go:172] (0xc00116edc0) Data frame received for 3
I0204 15:11:11.732339       8 log.go:172] (0xc0021ae000) (3) Data frame handling
I0204 15:11:11.732352       8 log.go:172] (0xc0021ae000) (3) Data frame sent
I0204 15:11:11.900545       8 log.go:172] (0xc00116edc0) (0xc0023ddae0) Stream removed, broadcasting: 5
I0204 15:11:11.900705       8 log.go:172] (0xc00116edc0) Data frame received for 1
I0204 15:11:11.900735       8 log.go:172] (0xc00116edc0) (0xc0021ae000) Stream removed, broadcasting: 3
I0204 15:11:11.900805       8 log.go:172] (0xc0024ec780) (1) Data frame handling
I0204 15:11:11.900822       8 log.go:172] (0xc0024ec780) (1) Data frame sent
I0204 15:11:11.900829       8 log.go:172] (0xc00116edc0) (0xc0024ec780) Stream removed, broadcasting: 1
I0204 15:11:11.900842       8 log.go:172] (0xc00116edc0) Go away received
I0204 15:11:11.901327       8 log.go:172] (0xc00116edc0) (0xc0024ec780) Stream removed, broadcasting: 1
I0204 15:11:11.901347       8 log.go:172] (0xc00116edc0) (0xc0021ae000) Stream removed, broadcasting: 3
I0204 15:11:11.901359       8 log.go:172] (0xc00116edc0) (0xc0023ddae0) Stream removed, broadcasting: 5
Feb  4 15:11:11.901: INFO: Exec stderr: ""
Feb  4 15:11:11.901: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:11.901: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:11.985083       8 log.go:172] (0xc0009b71e0) (0xc00198eb40) Create stream
I0204 15:11:11.985145       8 log.go:172] (0xc0009b71e0) (0xc00198eb40) Stream added, broadcasting: 1
I0204 15:11:11.991865       8 log.go:172] (0xc0009b71e0) Reply frame received for 1
I0204 15:11:11.991932       8 log.go:172] (0xc0009b71e0) (0xc0021ae0a0) Create stream
I0204 15:11:11.991946       8 log.go:172] (0xc0009b71e0) (0xc0021ae0a0) Stream added, broadcasting: 3
I0204 15:11:11.993398       8 log.go:172] (0xc0009b71e0) Reply frame received for 3
I0204 15:11:11.993429       8 log.go:172] (0xc0009b71e0) (0xc0020ca140) Create stream
I0204 15:11:11.993446       8 log.go:172] (0xc0009b71e0) (0xc0020ca140) Stream added, broadcasting: 5
I0204 15:11:11.994675       8 log.go:172] (0xc0009b71e0) Reply frame received for 5
I0204 15:11:12.096370       8 log.go:172] (0xc0009b71e0) Data frame received for 3
I0204 15:11:12.096415       8 log.go:172] (0xc0021ae0a0) (3) Data frame handling
I0204 15:11:12.096426       8 log.go:172] (0xc0021ae0a0) (3) Data frame sent
I0204 15:11:12.262217       8 log.go:172] (0xc0009b71e0) (0xc0020ca140) Stream removed, broadcasting: 5
I0204 15:11:12.262346       8 log.go:172] (0xc0009b71e0) Data frame received for 1
I0204 15:11:12.262367       8 log.go:172] (0xc00198eb40) (1) Data frame handling
I0204 15:11:12.262395       8 log.go:172] (0xc0009b71e0) (0xc0021ae0a0) Stream removed, broadcasting: 3
I0204 15:11:12.262433       8 log.go:172] (0xc00198eb40) (1) Data frame sent
I0204 15:11:12.262456       8 log.go:172] (0xc0009b71e0) (0xc00198eb40) Stream removed, broadcasting: 1
I0204 15:11:12.262467       8 log.go:172] (0xc0009b71e0) Go away received
I0204 15:11:12.262717       8 log.go:172] (0xc0009b71e0) (0xc00198eb40) Stream removed, broadcasting: 1
I0204 15:11:12.262736       8 log.go:172] (0xc0009b71e0) (0xc0021ae0a0) Stream removed, broadcasting: 3
I0204 15:11:12.262748       8 log.go:172] (0xc0009b71e0) (0xc0020ca140) Stream removed, broadcasting: 5
Feb  4 15:11:12.262: INFO: Exec stderr: ""
STEP: Verifying that the container's /etc/hosts is not kubelet-managed, since the container specifies its own /etc/hosts mount
Feb  4 15:11:12.263: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:12.263: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:12.329812       8 log.go:172] (0xc00111ad10) (0xc0020ca5a0) Create stream
I0204 15:11:12.329906       8 log.go:172] (0xc00111ad10) (0xc0020ca5a0) Stream added, broadcasting: 1
I0204 15:11:12.346808       8 log.go:172] (0xc00111ad10) Reply frame received for 1
I0204 15:11:12.346841       8 log.go:172] (0xc00111ad10) (0xc0023ddb80) Create stream
I0204 15:11:12.346856       8 log.go:172] (0xc00111ad10) (0xc0023ddb80) Stream added, broadcasting: 3
I0204 15:11:12.351470       8 log.go:172] (0xc00111ad10) Reply frame received for 3
I0204 15:11:12.351500       8 log.go:172] (0xc00111ad10) (0xc0024ec8c0) Create stream
I0204 15:11:12.351507       8 log.go:172] (0xc00111ad10) (0xc0024ec8c0) Stream added, broadcasting: 5
I0204 15:11:12.353415       8 log.go:172] (0xc00111ad10) Reply frame received for 5
I0204 15:11:12.479420       8 log.go:172] (0xc00111ad10) Data frame received for 3
I0204 15:11:12.479524       8 log.go:172] (0xc0023ddb80) (3) Data frame handling
I0204 15:11:12.479544       8 log.go:172] (0xc0023ddb80) (3) Data frame sent
I0204 15:11:12.740055       8 log.go:172] (0xc00111ad10) (0xc0023ddb80) Stream removed, broadcasting: 3
I0204 15:11:12.740236       8 log.go:172] (0xc00111ad10) Data frame received for 1
I0204 15:11:12.740268       8 log.go:172] (0xc0020ca5a0) (1) Data frame handling
I0204 15:11:12.740305       8 log.go:172] (0xc0020ca5a0) (1) Data frame sent
I0204 15:11:12.740325       8 log.go:172] (0xc00111ad10) (0xc0020ca5a0) Stream removed, broadcasting: 1
I0204 15:11:12.740344       8 log.go:172] (0xc00111ad10) (0xc0024ec8c0) Stream removed, broadcasting: 5
I0204 15:11:12.740364       8 log.go:172] (0xc00111ad10) Go away received
I0204 15:11:12.740662       8 log.go:172] (0xc00111ad10) (0xc0020ca5a0) Stream removed, broadcasting: 1
I0204 15:11:12.740685       8 log.go:172] (0xc00111ad10) (0xc0023ddb80) Stream removed, broadcasting: 3
I0204 15:11:12.740690       8 log.go:172] (0xc00111ad10) (0xc0024ec8c0) Stream removed, broadcasting: 5
Feb  4 15:11:12.740: INFO: Exec stderr: ""
Feb  4 15:11:12.741: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:12.741: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:12.808441       8 log.go:172] (0xc00138cc60) (0xc0016fc460) Create stream
I0204 15:11:12.808545       8 log.go:172] (0xc00138cc60) (0xc0016fc460) Stream added, broadcasting: 1
I0204 15:11:12.814949       8 log.go:172] (0xc00138cc60) Reply frame received for 1
I0204 15:11:12.814977       8 log.go:172] (0xc00138cc60) (0xc0020ca640) Create stream
I0204 15:11:12.814985       8 log.go:172] (0xc00138cc60) (0xc0020ca640) Stream added, broadcasting: 3
I0204 15:11:12.817105       8 log.go:172] (0xc00138cc60) Reply frame received for 3
I0204 15:11:12.817122       8 log.go:172] (0xc00138cc60) (0xc0016fc780) Create stream
I0204 15:11:12.817126       8 log.go:172] (0xc00138cc60) (0xc0016fc780) Stream added, broadcasting: 5
I0204 15:11:12.818448       8 log.go:172] (0xc00138cc60) Reply frame received for 5
I0204 15:11:12.900096       8 log.go:172] (0xc00138cc60) Data frame received for 3
I0204 15:11:12.900165       8 log.go:172] (0xc0020ca640) (3) Data frame handling
I0204 15:11:12.900180       8 log.go:172] (0xc0020ca640) (3) Data frame sent
I0204 15:11:13.031451       8 log.go:172] (0xc00138cc60) (0xc0020ca640) Stream removed, broadcasting: 3
I0204 15:11:13.031627       8 log.go:172] (0xc00138cc60) Data frame received for 1
I0204 15:11:13.031677       8 log.go:172] (0xc0016fc460) (1) Data frame handling
I0204 15:11:13.031703       8 log.go:172] (0xc0016fc460) (1) Data frame sent
I0204 15:11:13.031733       8 log.go:172] (0xc00138cc60) (0xc0016fc460) Stream removed, broadcasting: 1
I0204 15:11:13.031789       8 log.go:172] (0xc00138cc60) (0xc0016fc780) Stream removed, broadcasting: 5
I0204 15:11:13.031819       8 log.go:172] (0xc00138cc60) Go away received
I0204 15:11:13.031976       8 log.go:172] (0xc00138cc60) (0xc0016fc460) Stream removed, broadcasting: 1
I0204 15:11:13.031986       8 log.go:172] (0xc00138cc60) (0xc0020ca640) Stream removed, broadcasting: 3
I0204 15:11:13.031990       8 log.go:172] (0xc00138cc60) (0xc0016fc780) Stream removed, broadcasting: 5
Feb  4 15:11:13.032: INFO: Exec stderr: ""
STEP: Verifying that the container's /etc/hosts content is not kubelet-managed for a pod with hostNetwork=true
Feb  4 15:11:13.032: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:13.032: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:13.096707       8 log.go:172] (0xc000ff7c30) (0xc0021ae500) Create stream
I0204 15:11:13.096752       8 log.go:172] (0xc000ff7c30) (0xc0021ae500) Stream added, broadcasting: 1
I0204 15:11:13.103327       8 log.go:172] (0xc000ff7c30) Reply frame received for 1
I0204 15:11:13.103463       8 log.go:172] (0xc000ff7c30) (0xc0020ca820) Create stream
I0204 15:11:13.103477       8 log.go:172] (0xc000ff7c30) (0xc0020ca820) Stream added, broadcasting: 3
I0204 15:11:13.105208       8 log.go:172] (0xc000ff7c30) Reply frame received for 3
I0204 15:11:13.105238       8 log.go:172] (0xc000ff7c30) (0xc0016fc8c0) Create stream
I0204 15:11:13.105250       8 log.go:172] (0xc000ff7c30) (0xc0016fc8c0) Stream added, broadcasting: 5
I0204 15:11:13.107676       8 log.go:172] (0xc000ff7c30) Reply frame received for 5
I0204 15:11:13.203419       8 log.go:172] (0xc000ff7c30) Data frame received for 3
I0204 15:11:13.203598       8 log.go:172] (0xc0020ca820) (3) Data frame handling
I0204 15:11:13.203670       8 log.go:172] (0xc0020ca820) (3) Data frame sent
I0204 15:11:13.338650       8 log.go:172] (0xc000ff7c30) Data frame received for 1
I0204 15:11:13.338753       8 log.go:172] (0xc000ff7c30) (0xc0020ca820) Stream removed, broadcasting: 3
I0204 15:11:13.338828       8 log.go:172] (0xc0021ae500) (1) Data frame handling
I0204 15:11:13.338863       8 log.go:172] (0xc0021ae500) (1) Data frame sent
I0204 15:11:13.338887       8 log.go:172] (0xc000ff7c30) (0xc0021ae500) Stream removed, broadcasting: 1
I0204 15:11:13.339468       8 log.go:172] (0xc000ff7c30) (0xc0016fc8c0) Stream removed, broadcasting: 5
I0204 15:11:13.339557       8 log.go:172] (0xc000ff7c30) (0xc0021ae500) Stream removed, broadcasting: 1
I0204 15:11:13.339570       8 log.go:172] (0xc000ff7c30) (0xc0020ca820) Stream removed, broadcasting: 3
I0204 15:11:13.339626       8 log.go:172] (0xc000ff7c30) (0xc0016fc8c0) Stream removed, broadcasting: 5
I0204 15:11:13.339675       8 log.go:172] (0xc000ff7c30) Go away received
Feb  4 15:11:13.340: INFO: Exec stderr: ""
Feb  4 15:11:13.340: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:13.340: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:13.407715       8 log.go:172] (0xc00138de40) (0xc0016fcd20) Create stream
I0204 15:11:13.407892       8 log.go:172] (0xc00138de40) (0xc0016fcd20) Stream added, broadcasting: 1
I0204 15:11:13.414955       8 log.go:172] (0xc00138de40) Reply frame received for 1
I0204 15:11:13.415012       8 log.go:172] (0xc00138de40) (0xc0024eca00) Create stream
I0204 15:11:13.415034       8 log.go:172] (0xc00138de40) (0xc0024eca00) Stream added, broadcasting: 3
I0204 15:11:13.416789       8 log.go:172] (0xc00138de40) Reply frame received for 3
I0204 15:11:13.416827       8 log.go:172] (0xc00138de40) (0xc0024ecaa0) Create stream
I0204 15:11:13.416838       8 log.go:172] (0xc00138de40) (0xc0024ecaa0) Stream added, broadcasting: 5
I0204 15:11:13.421483       8 log.go:172] (0xc00138de40) Reply frame received for 5
I0204 15:11:13.527125       8 log.go:172] (0xc00138de40) Data frame received for 3
I0204 15:11:13.527212       8 log.go:172] (0xc0024eca00) (3) Data frame handling
I0204 15:11:13.527237       8 log.go:172] (0xc0024eca00) (3) Data frame sent
I0204 15:11:13.721080       8 log.go:172] (0xc00138de40) Data frame received for 1
I0204 15:11:13.721258       8 log.go:172] (0xc00138de40) (0xc0024eca00) Stream removed, broadcasting: 3
I0204 15:11:13.721353       8 log.go:172] (0xc0016fcd20) (1) Data frame handling
I0204 15:11:13.721398       8 log.go:172] (0xc0016fcd20) (1) Data frame sent
I0204 15:11:13.721409       8 log.go:172] (0xc00138de40) (0xc0016fcd20) Stream removed, broadcasting: 1
I0204 15:11:13.721499       8 log.go:172] (0xc00138de40) (0xc0024ecaa0) Stream removed, broadcasting: 5
I0204 15:11:13.721568       8 log.go:172] (0xc00138de40) Go away received
I0204 15:11:13.721650       8 log.go:172] (0xc00138de40) (0xc0016fcd20) Stream removed, broadcasting: 1
I0204 15:11:13.721663       8 log.go:172] (0xc00138de40) (0xc0024eca00) Stream removed, broadcasting: 3
I0204 15:11:13.721670       8 log.go:172] (0xc00138de40) (0xc0024ecaa0) Stream removed, broadcasting: 5
Feb  4 15:11:13.721: INFO: Exec stderr: ""
Feb  4 15:11:13.721: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:13.721: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:13.803164       8 log.go:172] (0xc00211ab00) (0xc0016fd9a0) Create stream
I0204 15:11:13.803267       8 log.go:172] (0xc00211ab00) (0xc0016fd9a0) Stream added, broadcasting: 1
I0204 15:11:13.814650       8 log.go:172] (0xc00211ab00) Reply frame received for 1
I0204 15:11:13.814758       8 log.go:172] (0xc00211ab00) (0xc0024ecb40) Create stream
I0204 15:11:13.814769       8 log.go:172] (0xc00211ab00) (0xc0024ecb40) Stream added, broadcasting: 3
I0204 15:11:13.817744       8 log.go:172] (0xc00211ab00) Reply frame received for 3
I0204 15:11:13.817817       8 log.go:172] (0xc00211ab00) (0xc0020ca8c0) Create stream
I0204 15:11:13.817863       8 log.go:172] (0xc00211ab00) (0xc0020ca8c0) Stream added, broadcasting: 5
I0204 15:11:13.821344       8 log.go:172] (0xc00211ab00) Reply frame received for 5
I0204 15:11:13.945055       8 log.go:172] (0xc00211ab00) Data frame received for 3
I0204 15:11:13.945606       8 log.go:172] (0xc0024ecb40) (3) Data frame handling
I0204 15:11:13.945685       8 log.go:172] (0xc0024ecb40) (3) Data frame sent
I0204 15:11:14.163331       8 log.go:172] (0xc00211ab00) (0xc0024ecb40) Stream removed, broadcasting: 3
I0204 15:11:14.163494       8 log.go:172] (0xc00211ab00) Data frame received for 1
I0204 15:11:14.163514       8 log.go:172] (0xc0016fd9a0) (1) Data frame handling
I0204 15:11:14.163531       8 log.go:172] (0xc0016fd9a0) (1) Data frame sent
I0204 15:11:14.163590       8 log.go:172] (0xc00211ab00) (0xc0016fd9a0) Stream removed, broadcasting: 1
I0204 15:11:14.163847       8 log.go:172] (0xc00211ab00) (0xc0020ca8c0) Stream removed, broadcasting: 5
I0204 15:11:14.163892       8 log.go:172] (0xc00211ab00) (0xc0016fd9a0) Stream removed, broadcasting: 1
I0204 15:11:14.163904       8 log.go:172] (0xc00211ab00) (0xc0024ecb40) Stream removed, broadcasting: 3
I0204 15:11:14.163913       8 log.go:172] (0xc00211ab00) (0xc0020ca8c0) Stream removed, broadcasting: 5
I0204 15:11:14.164132       8 log.go:172] (0xc00211ab00) Go away received
Feb  4 15:11:14.164: INFO: Exec stderr: ""
Feb  4 15:11:14.164: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1619 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  4 15:11:14.164: INFO: >>> kubeConfig: /root/.kube/config
I0204 15:11:14.250017       8 log.go:172] (0xc001f84790) (0xc0021aeb40) Create stream
I0204 15:11:14.250092       8 log.go:172] (0xc001f84790) (0xc0021aeb40) Stream added, broadcasting: 1
I0204 15:11:14.262623       8 log.go:172] (0xc001f84790) Reply frame received for 1
I0204 15:11:14.262743       8 log.go:172] (0xc001f84790) (0xc0016fde00) Create stream
I0204 15:11:14.262757       8 log.go:172] (0xc001f84790) (0xc0016fde00) Stream added, broadcasting: 3
I0204 15:11:14.264896       8 log.go:172] (0xc001f84790) Reply frame received for 3
I0204 15:11:14.264928       8 log.go:172] (0xc001f84790) (0xc0020ca960) Create stream
I0204 15:11:14.264938       8 log.go:172] (0xc001f84790) (0xc0020ca960) Stream added, broadcasting: 5
I0204 15:11:14.266301       8 log.go:172] (0xc001f84790) Reply frame received for 5
I0204 15:11:14.383662       8 log.go:172] (0xc001f84790) Data frame received for 3
I0204 15:11:14.383793       8 log.go:172] (0xc0016fde00) (3) Data frame handling
I0204 15:11:14.383807       8 log.go:172] (0xc0016fde00) (3) Data frame sent
I0204 15:11:14.573946       8 log.go:172] (0xc001f84790) Data frame received for 1
I0204 15:11:14.574090       8 log.go:172] (0xc0021aeb40) (1) Data frame handling
I0204 15:11:14.574128       8 log.go:172] (0xc0021aeb40) (1) Data frame sent
I0204 15:11:14.574605       8 log.go:172] (0xc001f84790) (0xc0021aeb40) Stream removed, broadcasting: 1
I0204 15:11:14.574987       8 log.go:172] (0xc001f84790) (0xc0016fde00) Stream removed, broadcasting: 3
I0204 15:11:14.575092       8 log.go:172] (0xc001f84790) (0xc0020ca960) Stream removed, broadcasting: 5
I0204 15:11:14.575122       8 log.go:172] (0xc001f84790) Go away received
I0204 15:11:14.575146       8 log.go:172] (0xc001f84790) (0xc0021aeb40) Stream removed, broadcasting: 1
I0204 15:11:14.575235       8 log.go:172] (0xc001f84790) (0xc0016fde00) Stream removed, broadcasting: 3
I0204 15:11:14.575347       8 log.go:172] (0xc001f84790) (0xc0020ca960) Stream removed, broadcasting: 5
Feb  4 15:11:14.575: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:11:14.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1619" for this suite.
Feb  4 15:12:16.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:12:16.751: INFO: namespace e2e-kubelet-etc-hosts-1619 deletion completed in 1m2.163362429s

• [SLOW TEST:90.111 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
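The KubeletManagedEtcHosts test above exercises three pod shapes: a regular pod (kubelet writes /etc/hosts), a container that mounts its own /etc/hosts (unmanaged), and a hostNetwork pod (unmanaged). A minimal sketch of the two unmanaged cases, assuming hypothetical names — `etc-hosts-demo`, `hosts-volume`, and the `busybox` image here are illustrative, not taken from the test's actual manifests:

```yaml
# Hypothetical pod: an explicit volumeMount over /etc/hosts means the kubelet
# leaves the file alone (the case busybox-3 verifies above).
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo            # illustrative name
spec:
  containers:
  - name: own-hosts
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-volume
      mountPath: /etc/hosts       # explicit mount wins over kubelet management
  volumes:
  - name: hosts-volume
    emptyDir: {}
---
# Hypothetical pod: hostNetwork pods keep the node's own /etc/hosts
# (the case test-host-network-pod verifies above).
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnet-demo    # illustrative name
spec:
  hostNetwork: true
  containers:
  - name: hostnet
    image: busybox
    command: ["sleep", "3600"]
```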
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:12:16.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7844
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node on which to schedule the stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7844
STEP: Creating statefulset with conflicting port in namespace statefulset-7844
STEP: Waiting until pod test-pod starts running in namespace statefulset-7844
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7844
Feb  4 15:12:27.081: INFO: Observed stateful pod in namespace: statefulset-7844, name: ss-0, uid: 6ff50c6e-247b-437c-ae07-65dca475a2ec, status phase: Pending. Waiting for statefulset controller to delete.
Feb  4 15:12:27.141: INFO: Observed stateful pod in namespace: statefulset-7844, name: ss-0, uid: 6ff50c6e-247b-437c-ae07-65dca475a2ec, status phase: Failed. Waiting for statefulset controller to delete.
Feb  4 15:12:27.152: INFO: Observed stateful pod in namespace: statefulset-7844, name: ss-0, uid: 6ff50c6e-247b-437c-ae07-65dca475a2ec, status phase: Failed. Waiting for statefulset controller to delete.
Feb  4 15:12:27.206: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7844
STEP: Removing pod with conflicting port in namespace statefulset-7844
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7844 and is running
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  4 15:12:39.890: INFO: Deleting all statefulset in ns statefulset-7844
Feb  4 15:12:39.896: INFO: Scaling statefulset ss to 0
Feb  4 15:12:50.017: INFO: Waiting for statefulset status.replicas updated to 0
Feb  4 15:12:50.020: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:12:50.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7844" for this suite.
Feb  4 15:12:56.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:12:56.329: INFO: namespace statefulset-7844 deletion completed in 6.281222651s

• [SLOW TEST:39.577 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
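The StatefulSet test above turns on a hostPort conflict: a standalone pod holds a node port first, so ss-0 is rejected (phase Failed), the StatefulSet controller deletes and recreates it, and once the conflicting pod is removed ss-0 comes up Running. A hedged sketch of the two conflicting specs — the names, node name, and port number below are illustrative placeholders, not the values the test actually generates:

```yaml
# Hypothetical manifests reproducing the hostPort conflict the test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: conflict-pod              # illustrative; the test names differ
spec:
  nodeName: some-node             # placeholder: pin both workloads to one node
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 21017             # illustrative port; claims the node port first
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels: {app: ss}
  template:
    metadata:
      labels: {app: ss}
    spec:
      nodeName: some-node         # same node, so ss-0 fails until conflict-pod is deleted
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 21017         # conflicts with conflict-pod's hostPort
```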
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:12:56.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-5xn7d in namespace proxy-6671
I0204 15:12:56.766567       8 runners.go:180] Created replication controller with name: proxy-service-5xn7d, namespace: proxy-6671, replica count: 1
I0204 15:12:57.817611       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 15:12:58.818063       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 15:12:59.818829       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 15:13:00.819349       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 15:13:01.819722       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 15:13:02.820276       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0204 15:13:03.821029       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:04.822037       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:05.822758       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:06.823189       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:07.823971       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:08.824784       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:09.825566       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:10.826428       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:11.827012       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:12.827495       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0204 15:13:13.827929       8 runners.go:180] proxy-service-5xn7d Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  4 15:13:13.835: INFO: setup took 17.272764475s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  4 15:13:13.896: INFO: (0) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 60.960146ms)
Feb  4 15:13:13.896: INFO: (0) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 60.775956ms)
Feb  4 15:13:13.897: INFO: (0) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 60.976738ms)
Feb  4 15:13:13.897: INFO: (0) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 61.092234ms)
Feb  4 15:13:13.897: INFO: (0) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 61.038151ms)
Feb  4 15:13:13.897: INFO: (0) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 60.973487ms)
Feb  4 15:13:13.906: INFO: (0) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 70.829597ms)
Feb  4 15:13:13.907: INFO: (0) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 71.145403ms)
Feb  4 15:13:13.907: INFO: (0) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 71.180503ms)
Feb  4 15:13:13.907: INFO: (0) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 71.174367ms)
Feb  4 15:13:13.911: INFO: (0) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 75.005792ms)
Feb  4 15:13:13.918: INFO: (0) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 82.704413ms)
Feb  4 15:13:13.918: INFO: (0) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 82.631019ms)
Feb  4 15:13:13.918: INFO: (0) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: ... (200; 16.529478ms)
Feb  4 15:13:13.936: INFO: (1) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 15.905467ms)
Feb  4 15:13:13.936: INFO: (1) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 16.618904ms)
Feb  4 15:13:13.936: INFO: (1) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 16.260218ms)
Feb  4 15:13:13.936: INFO: (1) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 16.041622ms)
Feb  4 15:13:13.936: INFO: (1) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 16.892637ms)
Feb  4 15:13:13.938: INFO: (1) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 17.557552ms)
Feb  4 15:13:13.938: INFO: (1) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 17.954923ms)
Feb  4 15:13:13.938: INFO: (1) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 18.656872ms)
Feb  4 15:13:13.939: INFO: (1) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: ... (200; 17.471182ms)
Feb  4 15:13:13.959: INFO: (2) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 17.844082ms)
Feb  4 15:13:13.959: INFO: (2) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 17.877651ms)
Feb  4 15:13:13.959: INFO: (2) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 17.376633ms)
Feb  4 15:13:13.959: INFO: (2) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 18.702353ms)
Feb  4 15:13:13.960: INFO: (2) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 18.85132ms)
Feb  4 15:13:13.960: INFO: (2) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 18.38478ms)
Feb  4 15:13:13.960: INFO: (2) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 18.735964ms)
Feb  4 15:13:13.961: INFO: (2) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 19.764055ms)
Feb  4 15:13:13.961: INFO: (2) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 20.374719ms)
Feb  4 15:13:13.962: INFO: (2) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 20.005509ms)
Feb  4 15:13:13.963: INFO: (2) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 21.659067ms)
Feb  4 15:13:13.979: INFO: (3) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 15.228755ms)
Feb  4 15:13:13.980: INFO: (3) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 16.624238ms)
Feb  4 15:13:13.980: INFO: (3) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 16.597099ms)
Feb  4 15:13:13.980: INFO: (3) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 16.638868ms)
Feb  4 15:13:13.980: INFO: (3) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 16.64952ms)
Feb  4 15:13:13.980: INFO: (3) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 16.98131ms)
Feb  4 15:13:13.980: INFO: (3) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 16.629144ms)
Feb  4 15:13:13.981: INFO: (3) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 16.954714ms)
Feb  4 15:13:13.982: INFO: (3) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 17.738888ms)
Feb  4 15:13:13.982: INFO: (3) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 18.038246ms)
Feb  4 15:13:13.983: INFO: (3) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 19.516315ms)
Feb  4 15:13:13.983: INFO: (3) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: test<... (200; 20.773557ms)
Feb  4 15:13:14.003: INFO: (4) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 18.509955ms)
Feb  4 15:13:14.003: INFO: (4) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 18.537784ms)
Feb  4 15:13:14.003: INFO: (4) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 18.860031ms)
Feb  4 15:13:14.003: INFO: (4) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 18.67283ms)
Feb  4 15:13:14.005: INFO: (4) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 19.952217ms)
Feb  4 15:13:14.005: INFO: (4) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: test<... (200; 20.03727ms)
Feb  4 15:13:14.005: INFO: (4) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 20.72001ms)
Feb  4 15:13:14.007: INFO: (4) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 22.319358ms)
Feb  4 15:13:14.007: INFO: (4) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 22.388831ms)
Feb  4 15:13:14.007: INFO: (4) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 22.294924ms)
Feb  4 15:13:14.008: INFO: (4) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 22.907419ms)
Feb  4 15:13:14.008: INFO: (4) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 22.960366ms)
Feb  4 15:13:14.008: INFO: (4) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 23.25099ms)
Feb  4 15:13:14.008: INFO: (4) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 23.383762ms)
Feb  4 15:13:14.008: INFO: (4) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 23.225532ms)
Feb  4 15:13:14.037: INFO: (5) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 28.080058ms)
Feb  4 15:13:14.038: INFO: (5) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 28.426077ms)
Feb  4 15:13:14.038: INFO: (5) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 28.645525ms)
Feb  4 15:13:14.038: INFO: (5) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 29.124886ms)
Feb  4 15:13:14.038: INFO: (5) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 28.757733ms)
Feb  4 15:13:14.038: INFO: (5) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 29.054884ms)
Feb  4 15:13:14.038: INFO: (5) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 28.587034ms)
Feb  4 15:13:14.038: INFO: (5) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: ... (200; 18.378876ms)
Feb  4 15:13:14.059: INFO: (6) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 18.744439ms)
Feb  4 15:13:14.059: INFO: (6) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 18.852669ms)
Feb  4 15:13:14.059: INFO: (6) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 18.763898ms)
Feb  4 15:13:14.059: INFO: (6) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 19.347452ms)
Feb  4 15:13:14.059: INFO: (6) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 19.135126ms)
Feb  4 15:13:14.060: INFO: (6) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 19.51666ms)
Feb  4 15:13:14.061: INFO: (6) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 20.171503ms)
Feb  4 15:13:14.061: INFO: (6) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 21.24528ms)
Feb  4 15:13:14.061: INFO: (6) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 21.224884ms)
Feb  4 15:13:14.083: INFO: (7) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 20.837228ms)
Feb  4 15:13:14.083: INFO: (7) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 21.000026ms)
Feb  4 15:13:14.083: INFO: (7) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 20.928226ms)
Feb  4 15:13:14.083: INFO: (7) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 21.297696ms)
Feb  4 15:13:14.084: INFO: (7) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 21.690764ms)
Feb  4 15:13:14.084: INFO: (7) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: test<... (200; 22.047305ms)
Feb  4 15:13:14.084: INFO: (7) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 22.456356ms)
Feb  4 15:13:14.084: INFO: (7) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 22.170695ms)
Feb  4 15:13:14.085: INFO: (7) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 22.401397ms)
Feb  4 15:13:14.085: INFO: (7) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 22.66684ms)
Feb  4 15:13:14.085: INFO: (7) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 22.999678ms)
Feb  4 15:13:14.086: INFO: (7) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 23.47642ms)
Feb  4 15:13:14.086: INFO: (7) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 23.757007ms)
Feb  4 15:13:14.086: INFO: (7) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 23.767134ms)
Feb  4 15:13:14.102: INFO: (8) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 15.293431ms)
Feb  4 15:13:14.102: INFO: (8) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 15.176571ms)
Feb  4 15:13:14.102: INFO: (8) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 15.106448ms)
Feb  4 15:13:14.103: INFO: (8) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 16.62169ms)
Feb  4 15:13:14.104: INFO: (8) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 17.031715ms)
Feb  4 15:13:14.104: INFO: (8) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: test<... (200; 17.514744ms)
Feb  4 15:13:14.104: INFO: (8) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 17.720128ms)
Feb  4 15:13:14.104: INFO: (8) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 17.496966ms)
Feb  4 15:13:14.105: INFO: (8) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 18.549117ms)
Feb  4 15:13:14.105: INFO: (8) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 18.590871ms)
Feb  4 15:13:14.106: INFO: (8) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 19.758363ms)
Feb  4 15:13:14.106: INFO: (8) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 19.845349ms)
Feb  4 15:13:14.107: INFO: (8) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 20.486064ms)
Feb  4 15:13:14.107: INFO: (8) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 20.612864ms)
Feb  4 15:13:14.117: INFO: (9) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 9.366781ms)
Feb  4 15:13:14.119: INFO: (9) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 11.179851ms)
Feb  4 15:13:14.121: INFO: (9) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 13.827195ms)
Feb  4 15:13:14.121: INFO: (9) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 13.903347ms)
Feb  4 15:13:14.122: INFO: (9) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 14.909595ms)
Feb  4 15:13:14.122: INFO: (9) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 14.775297ms)
Feb  4 15:13:14.123: INFO: (9) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 15.105387ms)
Feb  4 15:13:14.123: INFO: (9) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 15.390589ms)
Feb  4 15:13:14.123: INFO: (9) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 15.561439ms)
Feb  4 15:13:14.124: INFO: (9) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 16.045293ms)
Feb  4 15:13:14.124: INFO: (9) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 15.881598ms)
Feb  4 15:13:14.124: INFO: (9) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 16.46266ms)
Feb  4 15:13:14.124: INFO: (9) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 16.569904ms)
Feb  4 15:13:14.124: INFO: (9) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 16.22619ms)
Feb  4 15:13:14.124: INFO: (9) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: test<... (200; 10.778764ms)
Feb  4 15:13:14.136: INFO: (10) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 10.975036ms)
Feb  4 15:13:14.142: INFO: (10) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 16.790583ms)
Feb  4 15:13:14.142: INFO: (10) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 16.938529ms)
Feb  4 15:13:14.142: INFO: (10) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 17.570896ms)
Feb  4 15:13:14.143: INFO: (10) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 17.727976ms)
Feb  4 15:13:14.143: INFO: (10) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 18.101745ms)
Feb  4 15:13:14.143: INFO: (10) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 17.985259ms)
Feb  4 15:13:14.143: INFO: (10) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: ... (200; 14.693207ms)
Feb  4 15:13:14.168: INFO: (11) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 14.958737ms)
Feb  4 15:13:14.169: INFO: (11) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 14.888389ms)
Feb  4 15:13:14.169: INFO: (11) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 15.893235ms)
Feb  4 15:13:14.169: INFO: (11) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 16.254606ms)
Feb  4 15:13:14.170: INFO: (11) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 17.311973ms)
Feb  4 15:13:14.170: INFO: (11) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 17.301504ms)
Feb  4 15:13:14.171: INFO: (11) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 17.877458ms)
Feb  4 15:13:14.183: INFO: (12) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 12.169132ms)
Feb  4 15:13:14.183: INFO: (12) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 12.150416ms)
Feb  4 15:13:14.183: INFO: (12) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 12.309028ms)
Feb  4 15:13:14.184: INFO: (12) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 13.08089ms)
Feb  4 15:13:14.184: INFO: (12) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 13.327547ms)
Feb  4 15:13:14.185: INFO: (12) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 13.869851ms)
Feb  4 15:13:14.185: INFO: (12) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 14.427647ms)
Feb  4 15:13:14.186: INFO: (12) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 14.559312ms)
Feb  4 15:13:14.186: INFO: (12) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 15.150635ms)
Feb  4 15:13:14.186: INFO: (12) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 14.987498ms)
Feb  4 15:13:14.186: INFO: (12) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 15.005837ms)
Feb  4 15:13:14.187: INFO: (12) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 15.743307ms)
Feb  4 15:13:14.187: INFO: (12) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 15.742393ms)
Feb  4 15:13:14.187: INFO: (12) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 16.059154ms)
Feb  4 15:13:14.187: INFO: (12) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: ... (200; 12.398301ms)
Feb  4 15:13:14.200: INFO: (13) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 12.854591ms)
Feb  4 15:13:14.200: INFO: (13) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 12.975941ms)
Feb  4 15:13:14.201: INFO: (13) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 13.618434ms)
Feb  4 15:13:14.201: INFO: (13) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 13.72329ms)
Feb  4 15:13:14.202: INFO: (13) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: test (200; 14.263436ms)
Feb  4 15:13:14.218: INFO: (14) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 15.558009ms)
Feb  4 15:13:14.219: INFO: (14) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 16.496408ms)
Feb  4 15:13:14.219: INFO: (14) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 16.643366ms)
Feb  4 15:13:14.219: INFO: (14) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 16.75682ms)
Feb  4 15:13:14.219: INFO: (14) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 16.792869ms)
Feb  4 15:13:14.220: INFO: (14) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 16.821737ms)
Feb  4 15:13:14.220: INFO: (14) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 16.973705ms)
Feb  4 15:13:14.220: INFO: (14) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 16.810429ms)
Feb  4 15:13:14.220: INFO: (14) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 17.077257ms)
Feb  4 15:13:14.220: INFO: (14) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 17.086604ms)
Feb  4 15:13:14.233: INFO: (15) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 13.150376ms)
Feb  4 15:13:14.234: INFO: (15) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 13.848854ms)
Feb  4 15:13:14.234: INFO: (15) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 14.29376ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 14.356519ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 14.442675ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 14.688851ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 14.519129ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 14.644241ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 15.028772ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 15.125742ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 15.24135ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 15.422151ms)
Feb  4 15:13:14.235: INFO: (15) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 15.404507ms)
Feb  4 15:13:14.236: INFO: (15) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 16.151359ms)
Feb  4 15:13:14.237: INFO: (15) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 16.839603ms)
Feb  4 15:13:14.237: INFO: (15) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: ... (200; 3.631921ms)
Feb  4 15:13:14.243: INFO: (16) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 5.981157ms)
Feb  4 15:13:14.243: INFO: (16) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 6.042639ms)
Feb  4 15:13:14.244: INFO: (16) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 6.883167ms)
Feb  4 15:13:14.246: INFO: (16) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: test<... (200; 9.15504ms)
Feb  4 15:13:14.247: INFO: (16) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 9.382547ms)
Feb  4 15:13:14.247: INFO: (16) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 9.636349ms)
Feb  4 15:13:14.247: INFO: (16) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 9.747904ms)
Feb  4 15:13:14.247: INFO: (16) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 10.063715ms)
Feb  4 15:13:14.249: INFO: (16) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 11.246074ms)
Feb  4 15:13:14.249: INFO: (16) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 11.234077ms)
Feb  4 15:13:14.249: INFO: (16) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 11.380354ms)
Feb  4 15:13:14.249: INFO: (16) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 11.25856ms)
Feb  4 15:13:14.249: INFO: (16) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 11.442174ms)
Feb  4 15:13:14.252: INFO: (17) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 3.452016ms)
Feb  4 15:13:14.253: INFO: (17) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 3.803131ms)
Feb  4 15:13:14.256: INFO: (17) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 6.873977ms)
Feb  4 15:13:14.256: INFO: (17) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54/proxy/: test (200; 7.204756ms)
Feb  4 15:13:14.256: INFO: (17) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 7.286394ms)
Feb  4 15:13:14.257: INFO: (17) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: ... (200; 9.600849ms)
Feb  4 15:13:14.259: INFO: (17) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 9.628814ms)
Feb  4 15:13:14.259: INFO: (17) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 9.712303ms)
Feb  4 15:13:14.259: INFO: (17) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 9.803584ms)
Feb  4 15:13:14.259: INFO: (17) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 9.960973ms)
Feb  4 15:13:14.259: INFO: (17) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 10.380399ms)
Feb  4 15:13:14.259: INFO: (17) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 10.39241ms)
Feb  4 15:13:14.267: INFO: (18) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 7.875401ms)
Feb  4 15:13:14.267: INFO: (18) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 7.911966ms)
Feb  4 15:13:14.267: INFO: (18) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 7.976088ms)
Feb  4 15:13:14.270: INFO: (18) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 10.311398ms)
Feb  4 15:13:14.272: INFO: (18) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 12.292886ms)
Feb  4 15:13:14.272: INFO: (18) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 12.320819ms)
Feb  4 15:13:14.272: INFO: (18) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:462/proxy/: tls qux (200; 12.369891ms)
Feb  4 15:13:14.274: INFO: (18) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname2/proxy/: bar (200; 14.847859ms)
Feb  4 15:13:14.274: INFO: (18) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: test (200; 15.855775ms)
Feb  4 15:13:14.276: INFO: (18) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 15.98629ms)
Feb  4 15:13:14.276: INFO: (18) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:460/proxy/: tls baz (200; 16.344707ms)
Feb  4 15:13:14.276: INFO: (18) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 16.514934ms)
Feb  4 15:13:14.284: INFO: (19) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:1080/proxy/: ... (200; 7.21436ms)
Feb  4 15:13:14.284: INFO: (19) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:1080/proxy/: test<... (200; 8.023306ms)
Feb  4 15:13:14.284: INFO: (19) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:160/proxy/: foo (200; 8.253488ms)
Feb  4 15:13:14.284: INFO: (19) /api/v1/namespaces/proxy-6671/pods/proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 8.280684ms)
Feb  4 15:13:14.285: INFO: (19) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname2/proxy/: tls qux (200; 9.107833ms)
Feb  4 15:13:14.286: INFO: (19) /api/v1/namespaces/proxy-6671/pods/https:proxy-service-5xn7d-z7t54:443/proxy/: test (200; 11.149727ms)
Feb  4 15:13:14.295: INFO: (19) /api/v1/namespaces/proxy-6671/services/http:proxy-service-5xn7d:portname1/proxy/: foo (200; 18.362638ms)
Feb  4 15:13:14.295: INFO: (19) /api/v1/namespaces/proxy-6671/pods/http:proxy-service-5xn7d-z7t54:162/proxy/: bar (200; 18.336279ms)
Feb  4 15:13:14.295: INFO: (19) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname2/proxy/: bar (200; 18.851536ms)
Feb  4 15:13:14.295: INFO: (19) /api/v1/namespaces/proxy-6671/services/https:proxy-service-5xn7d:tlsportname1/proxy/: tls baz (200; 18.891819ms)
Feb  4 15:13:14.296: INFO: (19) /api/v1/namespaces/proxy-6671/services/proxy-service-5xn7d:portname1/proxy/: foo (200; 19.301669ms)
STEP: deleting ReplicationController proxy-service-5xn7d in namespace proxy-6671, will wait for the garbage collector to delete the pods
Feb  4 15:13:14.361: INFO: Deleting ReplicationController proxy-service-5xn7d took: 11.174324ms
Feb  4 15:13:14.661: INFO: Terminating ReplicationController proxy-service-5xn7d pods took: 300.843115ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:13:26.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6671" for this suite.
Feb  4 15:13:32.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:13:32.824: INFO: namespace proxy-6671 deletion completed in 6.239809484s

• [SLOW TEST:36.495 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
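The proxy endpoints polled above all follow the apiserver proxy URL scheme `/api/v1/namespaces/<ns>/{pods|services}/[<scheme>:]<name>:<port-or-portname>/proxy/<subpath>`. A minimal sketch of how those URLs are assembled — the namespace and target names are taken from this run, but the `proxy_url` helper itself is illustrative and not part of the e2e framework:

```shell
#!/bin/sh
# Sketch: rebuild the apiserver proxy URLs polled in the log above.
proxy_url() {
  # $1 = resource kind (pods|services), $2 = [scheme:]name:port, $3 = subpath
  printf '/api/v1/namespaces/proxy-6671/%s/%s/proxy/%s\n' "$1" "$2" "$3"
}

proxy_url pods     "https:proxy-service-5xn7d-z7t54:443" ""
proxy_url services "proxy-service-5xn7d:portname1"       ""
```

Requests to these paths are served by the apiserver, which forwards them to the pod or service endpoint and returns the backend's body — which is why the log records both the backend payload (`foo`, `bar`, `tls baz`, …) and the round-trip latency.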
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:13:32.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 15:13:32.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  4 15:13:33.109: INFO: stderr: ""
Feb  4 15:13:33.109: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:13:33.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5579" for this suite.
Feb  4 15:13:39.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:13:39.304: INFO: namespace kubectl-5579 deletion completed in 6.185492954s

• [SLOW TEST:6.479 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
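The `kubectl version` stdout captured above is Go struct syntax (`version.Info{...}`); the client and server GitVersion fields can be pulled out with standard text tools. A sketch under that assumption — the `extract_versions` helper and its `sed` expression are illustrative, not something the test does:

```shell
#!/bin/sh
# Sketch: extract GitVersion values from `kubectl version` struct output.
extract_versions() {
  # Each version.Info line carries one GitVersion:"vX.Y.Z"; print the value.
  sed -n 's/.*GitVersion:"\([^"]*\)".*/\1/p'
}

printf '%s\n%s\n' \
  'Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", Compiler:"gc"}' \
  'Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", Compiler:"gc"}' \
  | extract_versions
```

Run against the output logged above, this yields `v1.15.7` for the client and `v1.15.1` for the server, matching the suite header's kubectl/kube-apiserver versions.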
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  4 15:13:39.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 15:13:39.437: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  4 15:13:44.450: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  4 15:13:48.467: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  4 15:13:48.527: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7910,SelfLink:/apis/apps/v1/namespaces/deployment-7910/deployments/test-cleanup-deployment,UID:919e2191-e5c2-4d27-9d25-5c452069d646,ResourceVersion:23085295,Generation:1,CreationTimestamp:2020-02-04 15:13:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
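The `RevisionHistoryLimit:*0` in the spec dump above is what drives this test: with a limit of 0, the deployment controller deletes old ReplicaSets as soon as the rollout completes. Expressed as a manifest, the equivalent field looks like this (names, labels, and image taken from the dump; this is a sketch of the field's use, not the exact object the test generates):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  namespace: deployment-7910
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets after rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```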

Feb  4 15:13:48.534: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Feb  4 15:13:48.534: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  4 15:13:48.535: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7910,SelfLink:/apis/apps/v1/namespaces/deployment-7910/replicasets/test-cleanup-controller,UID:f107ace0-d332-45aa-a1a5-868d74f4ccbb,ResourceVersion:23085296,Generation:1,CreationTimestamp:2020-02-04 15:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 919e2191-e5c2-4d27-9d25-5c452069d646 0xc002c0eac7 0xc002c0eac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  4 15:13:48.674: INFO: Pod "test-cleanup-controller-sttqp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-sttqp,GenerateName:test-cleanup-controller-,Namespace:deployment-7910,SelfLink:/api/v1/namespaces/deployment-7910/pods/test-cleanup-controller-sttqp,UID:8aa6fe6a-d2a3-43d2-8611-54ccbfa69350,ResourceVersion:23085293,Generation:0,CreationTimestamp:2020-02-04 15:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f107ace0-d332-45aa-a1a5-868d74f4ccbb 0xc002c0f1c7 0xc002c0f1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xkdjj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xkdjj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xkdjj true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002c0f240} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002c0f260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 15:13:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 15:13:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 15:13:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-04 15:13:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-04 15:13:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-04 15:13:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://243aeea4ca0d7207abaddf216b55bc09fff8717b1c280019d2b93cc6c7b54d96}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 15:13:48.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7910" for this suite.
Feb  4 15:13:56.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  4 15:13:57.004: INFO: namespace deployment-7910 deletion completed in 8.265275885s

• [SLOW TEST:17.699 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
Feb  4 15:13:57.004: INFO: Running AfterSuite actions on all nodes
Feb  4 15:13:57.005: INFO: Running AfterSuite actions on node 1
Feb  4 15:13:57.005: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8262.201 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS