I0203 12:56:11.615166 8 e2e.go:243] Starting e2e run "93fa20d9-20dc-492b-beab-ce2fdbf52e63" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580734570 - Will randomize all specs
Will run 215 of 4412 specs

Feb 3 12:56:11.965: INFO: >>> kubeConfig: /root/.kube/config
Feb 3 12:56:11.969: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 3 12:56:11.999: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 3 12:56:12.035: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 3 12:56:12.035: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 3 12:56:12.035: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 3 12:56:12.050: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 3 12:56:12.050: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 3 12:56:12.050: INFO: e2e test version: v1.15.7
Feb 3 12:56:12.053: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 12:56:12.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Feb 3 12:56:12.231: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
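The startup gates logged above ("all (but 0) nodes to be schedulable", "10 / 10 pods ... running and ready") follow a simple all-but-N pattern. A minimal illustrative sketch of that gate logic, not the framework's actual code (the function and parameter names are made up):

```python
def cluster_ready(node_schedulable, pods_ready, allowed_not_ready=0):
    """Mimic the suite's startup gates: all but N nodes must be schedulable,
    and every observed kube-system pod must be running and ready."""
    not_ready = sum(1 for ok in node_schedulable if not ok)
    return not_ready <= allowed_not_ready and all(pods_ready)
```

With the state logged here (2 schedulable nodes, 10/10 pods ready, allowed_not_ready=0), the gate passes and the suite proceeds to the first spec.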
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-477a2d64-baa1-424f-aeb8-62e1e15e6e1e
STEP: Creating a pod to test consume configMaps
Feb 3 12:56:12.281: INFO: Waiting up to 5m0s for pod "pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177" in namespace "configmap-4367" to be "success or failure"
Feb 3 12:56:12.352: INFO: Pod "pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177": Phase="Pending", Reason="", readiness=false. Elapsed: 70.509916ms
Feb 3 12:56:14.360: INFO: Pod "pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079096567s
Feb 3 12:56:16.370: INFO: Pod "pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089427695s
Feb 3 12:56:18.380: INFO: Pod "pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099039236s
Feb 3 12:56:20.391: INFO: Pod "pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110031429s
Feb 3 12:56:22.410: INFO: Pod "pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.129285648s
STEP: Saw pod success
Feb 3 12:56:22.410: INFO: Pod "pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177" satisfied condition "success or failure"
Feb 3 12:56:22.415: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177 container configmap-volume-test:
STEP: delete the pod
Feb 3 12:56:22.547: INFO: Waiting for pod pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177 to disappear
Feb 3 12:56:22.552: INFO: Pod pod-configmaps-1511fbf1-536d-40c8-82a9-0376227cc177 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 12:56:22.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4367" for this suite.
Feb 3 12:56:28.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 12:56:28.714: INFO: namespace configmap-4367 deletion completed in 6.153745813s

• [SLOW TEST:16.660 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 12:56:28.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
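The per-poll Elapsed values in the ConfigMap test above can be cross-checked against the log timestamps themselves. A small stdlib-only helper (an illustration for reading such logs, not part of the e2e framework):

```python
from datetime import datetime

def elapsed_seconds(start: str, end: str) -> float:
    """Difference in seconds between two log timestamps of the form HH:MM:SS.ffffff."""
    fmt = "%H:%M:%S.%f"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()

# First Pending poll (12:56:12.352) to Succeeded (12:56:22.410) is roughly 10.06s,
# consistent with the reported Elapsed of 10.129s, which is measured from the
# slightly earlier start of the wait (12:56:12.281).
```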
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 3 12:56:28.797: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 3 12:56:33.827: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 3 12:56:37.843: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 3 12:56:39.855: INFO: Creating deployment "test-rollover-deployment"
Feb 3 12:56:39.894: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 3 12:56:42.220: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 3 12:56:42.243: INFO: Ensure that both replica sets have 1 created replica
Feb 3 12:56:42.284: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 3 12:56:42.356: INFO: Updating deployment test-rollover-deployment
Feb 3 12:56:42.357: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 3 12:56:44.389: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 3 12:56:44.399: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 3 12:56:44.405: INFO: all replica sets need to contain the pod-template-hash label
Feb 3 12:56:44.405: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:56:46.424: INFO: all replica sets need to contain the pod-template-hash label Feb 3 12:56:46.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:56:48.421: INFO: all replica sets need to contain the pod-template-hash label Feb 3 12:56:48.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:56:50.426: INFO: all replica sets need to contain the pod-template-hash label Feb 3 12:56:50.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331402, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:56:52.469: INFO: all replica sets need to contain the pod-template-hash label Feb 3 12:56:52.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331412, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:56:54.422: INFO: all replica sets need to contain the pod-template-hash label Feb 3 12:56:54.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331412, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:56:56.439: INFO: all replica sets need to contain the pod-template-hash label Feb 3 12:56:56.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331412, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:56:58.422: INFO: all replica sets need to contain the pod-template-hash label Feb 3 12:56:58.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331412, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:57:00.422: INFO: all replica sets need to contain the pod-template-hash label Feb 3 12:57:00.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331412, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:57:02.425: INFO: all replica sets need to contain the pod-template-hash label Feb 3 12:57:02.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331412, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716331399, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 3 12:57:04.434: INFO: Feb 3 12:57:04.434: INFO: Ensure that both old replica 
sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 3 12:57:04.449: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8609,SelfLink:/apis/apps/v1/namespaces/deployment-8609/deployments/test-rollover-deployment,UID:143d99b3-e3fd-416e-93f7-85e0a4864574,ResourceVersion:22937822,Generation:2,CreationTimestamp:2020-02-03 12:56:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-03 12:56:39 +0000 UTC 2020-02-03 12:56:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-03 12:57:02 +0000 UTC 2020-02-03 12:56:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 3 12:57:04.457: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8609,SelfLink:/apis/apps/v1/namespaces/deployment-8609/replicasets/test-rollover-deployment-854595fc44,UID:634b89f9-ea2f-4424-a489-f52a03cdfd37,ResourceVersion:22937813,Generation:2,CreationTimestamp:2020-02-03 12:56:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 143d99b3-e3fd-416e-93f7-85e0a4864574 0xc0023d0727 0xc0023d0728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 3 12:57:04.457: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 3 12:57:04.457: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8609,SelfLink:/apis/apps/v1/namespaces/deployment-8609/replicasets/test-rollover-controller,UID:ac14c9f1-1499-434f-a5dc-b55d6f2c4f92,ResourceVersion:22937821,Generation:2,CreationTimestamp:2020-02-03 12:56:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 143d99b3-e3fd-416e-93f7-85e0a4864574 0xc0023d0657 0xc0023d0658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 3 12:57:04.457: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8609,SelfLink:/apis/apps/v1/namespaces/deployment-8609/replicasets/test-rollover-deployment-9b8b997cf,UID:aa52215c-b8e0-42a7-8079-88e9847cd36d,ResourceVersion:22937773,Generation:2,CreationTimestamp:2020-02-03 12:56:39 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 143d99b3-e3fd-416e-93f7-85e0a4864574 0xc0023d07f0 0xc0023d07f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 3 12:57:04.462: INFO: Pod "test-rollover-deployment-854595fc44-d8jrj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-d8jrj,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8609,SelfLink:/api/v1/namespaces/deployment-8609/pods/test-rollover-deployment-854595fc44-d8jrj,UID:c68a0d8d-91de-4366-af7d-103f9549cf99,ResourceVersion:22937796,Generation:0,CreationTimestamp:2020-02-03 12:56:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 634b89f9-ea2f-4424-a489-f52a03cdfd37 0xc002444da7 0xc002444da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-phjc2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-phjc2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-phjc2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002444e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002444e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 12:56:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 12:56:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 12:56:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 12:56:42 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-03 12:56:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-03 12:56:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://8a7380ac14d4a3e19456095d4f8a57b59e979eed3116e4151244ddb7e6510d40}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 12:57:04.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8609" for this suite. Feb 3 12:57:12.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 12:57:12.630: INFO: namespace deployment-8609 deletion completed in 8.160266272s • [SLOW TEST:43.916 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 12:57:12.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret 
secrets-1711/secret-test-db40ea8d-45d5-4475-a1db-503f00e97f11
STEP: Creating a pod to test consume secrets
Feb 3 12:57:12.846: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b" in namespace "secrets-1711" to be "success or failure"
Feb 3 12:57:12.871: INFO: Pod "pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.209658ms
Feb 3 12:57:14.880: INFO: Pod "pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034418659s
Feb 3 12:57:16.892: INFO: Pod "pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046660723s
Feb 3 12:57:18.911: INFO: Pod "pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065649066s
Feb 3 12:57:20.930: INFO: Pod "pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084192222s
Feb 3 12:57:22.942: INFO: Pod "pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095951351s
STEP: Saw pod success
Feb 3 12:57:22.942: INFO: Pod "pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b" satisfied condition "success or failure"
Feb 3 12:57:22.947: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b container env-test:
STEP: delete the pod
Feb 3 12:57:23.126: INFO: Waiting for pod pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b to disappear
Feb 3 12:57:23.141: INFO: Pod pod-configmaps-c8777040-be72-4e75-9469-a002f0c47d4b no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 12:57:23.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1711" for this suite.
Feb 3 12:57:29.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 12:57:29.341: INFO: namespace secrets-1711 deletion completed in 6.185855456s

• [SLOW TEST:16.708 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 12:57:29.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 3 12:57:51.579: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 12:57:51.580: INFO: >>> kubeConfig: /root/.kube/config
I0203 12:57:51.668522 8 log.go:172] (0xc0000ed810) (0xc000db9cc0) Create stream
I0203 12:57:51.668604 8
log.go:172] (0xc0000ed810) (0xc000db9cc0) Stream added, broadcasting: 1 I0203 12:57:51.677725 8 log.go:172] (0xc0000ed810) Reply frame received for 1 I0203 12:57:51.677842 8 log.go:172] (0xc0000ed810) (0xc00239c000) Create stream I0203 12:57:51.677858 8 log.go:172] (0xc0000ed810) (0xc00239c000) Stream added, broadcasting: 3 I0203 12:57:51.680836 8 log.go:172] (0xc0000ed810) Reply frame received for 3 I0203 12:57:51.680879 8 log.go:172] (0xc0000ed810) (0xc000db9e00) Create stream I0203 12:57:51.680895 8 log.go:172] (0xc0000ed810) (0xc000db9e00) Stream added, broadcasting: 5 I0203 12:57:51.683079 8 log.go:172] (0xc0000ed810) Reply frame received for 5 I0203 12:57:51.917585 8 log.go:172] (0xc0000ed810) Data frame received for 3 I0203 12:57:51.917793 8 log.go:172] (0xc00239c000) (3) Data frame handling I0203 12:57:51.918139 8 log.go:172] (0xc00239c000) (3) Data frame sent I0203 12:57:52.065212 8 log.go:172] (0xc0000ed810) (0xc00239c000) Stream removed, broadcasting: 3 I0203 12:57:52.065525 8 log.go:172] (0xc0000ed810) Data frame received for 1 I0203 12:57:52.065545 8 log.go:172] (0xc000db9cc0) (1) Data frame handling I0203 12:57:52.065581 8 log.go:172] (0xc000db9cc0) (1) Data frame sent I0203 12:57:52.065740 8 log.go:172] (0xc0000ed810) (0xc000db9cc0) Stream removed, broadcasting: 1 I0203 12:57:52.067204 8 log.go:172] (0xc0000ed810) (0xc000db9e00) Stream removed, broadcasting: 5 I0203 12:57:52.067396 8 log.go:172] (0xc0000ed810) Go away received I0203 12:57:52.068355 8 log.go:172] (0xc0000ed810) (0xc000db9cc0) Stream removed, broadcasting: 1 I0203 12:57:52.068518 8 log.go:172] (0xc0000ed810) (0xc00239c000) Stream removed, broadcasting: 3 I0203 12:57:52.068527 8 log.go:172] (0xc0000ed810) (0xc000db9e00) Stream removed, broadcasting: 5 Feb 3 12:57:52.068: INFO: Exec stderr: "" Feb 3 12:57:52.068: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Feb 3 12:57:52.069: INFO: >>> kubeConfig: /root/.kube/config I0203 12:57:52.143269 8 log.go:172] (0xc00094a8f0) (0xc00212a000) Create stream I0203 12:57:52.143493 8 log.go:172] (0xc00094a8f0) (0xc00212a000) Stream added, broadcasting: 1 I0203 12:57:52.151784 8 log.go:172] (0xc00094a8f0) Reply frame received for 1 I0203 12:57:52.151871 8 log.go:172] (0xc00094a8f0) (0xc00212a0a0) Create stream I0203 12:57:52.151910 8 log.go:172] (0xc00094a8f0) (0xc00212a0a0) Stream added, broadcasting: 3 I0203 12:57:52.159882 8 log.go:172] (0xc00094a8f0) Reply frame received for 3 I0203 12:57:52.159957 8 log.go:172] (0xc00094a8f0) (0xc00194e820) Create stream I0203 12:57:52.159985 8 log.go:172] (0xc00094a8f0) (0xc00194e820) Stream added, broadcasting: 5 I0203 12:57:52.162138 8 log.go:172] (0xc00094a8f0) Reply frame received for 5 I0203 12:57:52.300623 8 log.go:172] (0xc00094a8f0) Data frame received for 3 I0203 12:57:52.300729 8 log.go:172] (0xc00212a0a0) (3) Data frame handling I0203 12:57:52.300792 8 log.go:172] (0xc00212a0a0) (3) Data frame sent I0203 12:57:52.571472 8 log.go:172] (0xc00094a8f0) (0xc00212a0a0) Stream removed, broadcasting: 3 I0203 12:57:52.571864 8 log.go:172] (0xc00094a8f0) Data frame received for 1 I0203 12:57:52.571907 8 log.go:172] (0xc00212a000) (1) Data frame handling I0203 12:57:52.571946 8 log.go:172] (0xc00212a000) (1) Data frame sent I0203 12:57:52.571963 8 log.go:172] (0xc00094a8f0) (0xc00212a000) Stream removed, broadcasting: 1 I0203 12:57:52.572246 8 log.go:172] (0xc00094a8f0) (0xc00194e820) Stream removed, broadcasting: 5 I0203 12:57:52.572566 8 log.go:172] (0xc00094a8f0) (0xc00212a000) Stream removed, broadcasting: 1 I0203 12:57:52.572644 8 log.go:172] (0xc00094a8f0) Go away received I0203 12:57:52.572757 8 log.go:172] (0xc00094a8f0) (0xc00212a0a0) Stream removed, broadcasting: 3 I0203 12:57:52.572782 8 log.go:172] (0xc00094a8f0) (0xc00194e820) Stream removed, broadcasting: 5 Feb 3 12:57:52.572: INFO: 
Exec stderr: "" Feb 3 12:57:52.573: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 12:57:52.573: INFO: >>> kubeConfig: /root/.kube/config I0203 12:57:52.676360 8 log.go:172] (0xc00094b600) (0xc00212a280) Create stream I0203 12:57:52.676671 8 log.go:172] (0xc00094b600) (0xc00212a280) Stream added, broadcasting: 1 I0203 12:57:52.685691 8 log.go:172] (0xc00094b600) Reply frame received for 1 I0203 12:57:52.685758 8 log.go:172] (0xc00094b600) (0xc00194ea00) Create stream I0203 12:57:52.685773 8 log.go:172] (0xc00094b600) (0xc00194ea00) Stream added, broadcasting: 3 I0203 12:57:52.687214 8 log.go:172] (0xc00094b600) Reply frame received for 3 I0203 12:57:52.687248 8 log.go:172] (0xc00094b600) (0xc002239220) Create stream I0203 12:57:52.687256 8 log.go:172] (0xc00094b600) (0xc002239220) Stream added, broadcasting: 5 I0203 12:57:52.693278 8 log.go:172] (0xc00094b600) Reply frame received for 5 I0203 12:57:52.857940 8 log.go:172] (0xc00094b600) Data frame received for 3 I0203 12:57:52.858147 8 log.go:172] (0xc00194ea00) (3) Data frame handling I0203 12:57:52.858248 8 log.go:172] (0xc00194ea00) (3) Data frame sent I0203 12:57:52.991003 8 log.go:172] (0xc00094b600) Data frame received for 1 I0203 12:57:52.991261 8 log.go:172] (0xc00212a280) (1) Data frame handling I0203 12:57:52.991368 8 log.go:172] (0xc00212a280) (1) Data frame sent I0203 12:57:52.991407 8 log.go:172] (0xc00094b600) (0xc00212a280) Stream removed, broadcasting: 1 I0203 12:57:52.991476 8 log.go:172] (0xc00094b600) (0xc00194ea00) Stream removed, broadcasting: 3 I0203 12:57:52.991699 8 log.go:172] (0xc00094b600) (0xc002239220) Stream removed, broadcasting: 5 I0203 12:57:52.991753 8 log.go:172] (0xc00094b600) Go away received I0203 12:57:52.991836 8 log.go:172] (0xc00094b600) (0xc00212a280) Stream removed, broadcasting: 1 I0203 12:57:52.991881 8 log.go:172] 
(0xc00094b600) (0xc00194ea00) Stream removed, broadcasting: 3 I0203 12:57:52.991885 8 log.go:172] (0xc00094b600) (0xc002239220) Stream removed, broadcasting: 5 Feb 3 12:57:52.991: INFO: Exec stderr: "" Feb 3 12:57:52.992: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 12:57:52.992: INFO: >>> kubeConfig: /root/.kube/config I0203 12:57:53.052546 8 log.go:172] (0xc000f8a8f0) (0xc0022395e0) Create stream I0203 12:57:53.052947 8 log.go:172] (0xc000f8a8f0) (0xc0022395e0) Stream added, broadcasting: 1 I0203 12:57:53.062357 8 log.go:172] (0xc000f8a8f0) Reply frame received for 1 I0203 12:57:53.062526 8 log.go:172] (0xc000f8a8f0) (0xc00239c140) Create stream I0203 12:57:53.062582 8 log.go:172] (0xc000f8a8f0) (0xc00239c140) Stream added, broadcasting: 3 I0203 12:57:53.064187 8 log.go:172] (0xc000f8a8f0) Reply frame received for 3 I0203 12:57:53.064240 8 log.go:172] (0xc000f8a8f0) (0xc00194eaa0) Create stream I0203 12:57:53.064247 8 log.go:172] (0xc000f8a8f0) (0xc00194eaa0) Stream added, broadcasting: 5 I0203 12:57:53.065568 8 log.go:172] (0xc000f8a8f0) Reply frame received for 5 I0203 12:57:53.207186 8 log.go:172] (0xc000f8a8f0) Data frame received for 3 I0203 12:57:53.207254 8 log.go:172] (0xc00239c140) (3) Data frame handling I0203 12:57:53.207276 8 log.go:172] (0xc00239c140) (3) Data frame sent I0203 12:57:53.329169 8 log.go:172] (0xc000f8a8f0) Data frame received for 1 I0203 12:57:53.329341 8 log.go:172] (0xc0022395e0) (1) Data frame handling I0203 12:57:53.329396 8 log.go:172] (0xc0022395e0) (1) Data frame sent I0203 12:57:53.329426 8 log.go:172] (0xc000f8a8f0) (0xc0022395e0) Stream removed, broadcasting: 1 I0203 12:57:53.329879 8 log.go:172] (0xc000f8a8f0) (0xc00239c140) Stream removed, broadcasting: 3 I0203 12:57:53.331011 8 log.go:172] (0xc000f8a8f0) (0xc00194eaa0) Stream removed, broadcasting: 5 I0203 
12:57:53.331526 8 log.go:172] (0xc000f8a8f0) (0xc0022395e0) Stream removed, broadcasting: 1 I0203 12:57:53.331637 8 log.go:172] (0xc000f8a8f0) (0xc00239c140) Stream removed, broadcasting: 3 I0203 12:57:53.331652 8 log.go:172] (0xc000f8a8f0) (0xc00194eaa0) Stream removed, broadcasting: 5 I0203 12:57:53.331735 8 log.go:172] (0xc000f8a8f0) Go away received Feb 3 12:57:53.331: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 3 12:57:53.331: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 12:57:53.332: INFO: >>> kubeConfig: /root/.kube/config I0203 12:57:53.406732 8 log.go:172] (0xc000f9a840) (0xc00194ed20) Create stream I0203 12:57:53.406860 8 log.go:172] (0xc000f9a840) (0xc00194ed20) Stream added, broadcasting: 1 I0203 12:57:53.416560 8 log.go:172] (0xc000f9a840) Reply frame received for 1 I0203 12:57:53.416637 8 log.go:172] (0xc000f9a840) (0xc00212a320) Create stream I0203 12:57:53.416645 8 log.go:172] (0xc000f9a840) (0xc00212a320) Stream added, broadcasting: 3 I0203 12:57:53.419323 8 log.go:172] (0xc000f9a840) Reply frame received for 3 I0203 12:57:53.419421 8 log.go:172] (0xc000f9a840) (0xc00239c1e0) Create stream I0203 12:57:53.419455 8 log.go:172] (0xc000f9a840) (0xc00239c1e0) Stream added, broadcasting: 5 I0203 12:57:53.423203 8 log.go:172] (0xc000f9a840) Reply frame received for 5 I0203 12:57:53.538028 8 log.go:172] (0xc000f9a840) Data frame received for 3 I0203 12:57:53.538152 8 log.go:172] (0xc00212a320) (3) Data frame handling I0203 12:57:53.538180 8 log.go:172] (0xc00212a320) (3) Data frame sent I0203 12:57:53.709236 8 log.go:172] (0xc000f9a840) Data frame received for 1 I0203 12:57:53.709378 8 log.go:172] (0xc000f9a840) (0xc00212a320) Stream removed, broadcasting: 3 I0203 12:57:53.709509 8 log.go:172] (0xc000f9a840) 
(0xc00239c1e0) Stream removed, broadcasting: 5 I0203 12:57:53.709580 8 log.go:172] (0xc00194ed20) (1) Data frame handling I0203 12:57:53.709616 8 log.go:172] (0xc00194ed20) (1) Data frame sent I0203 12:57:53.709628 8 log.go:172] (0xc000f9a840) (0xc00194ed20) Stream removed, broadcasting: 1 I0203 12:57:53.709644 8 log.go:172] (0xc000f9a840) Go away received I0203 12:57:53.709989 8 log.go:172] (0xc000f9a840) (0xc00194ed20) Stream removed, broadcasting: 1 I0203 12:57:53.710027 8 log.go:172] (0xc000f9a840) (0xc00212a320) Stream removed, broadcasting: 3 I0203 12:57:53.710051 8 log.go:172] (0xc000f9a840) (0xc00239c1e0) Stream removed, broadcasting: 5 Feb 3 12:57:53.710: INFO: Exec stderr: "" Feb 3 12:57:53.710: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 12:57:53.710: INFO: >>> kubeConfig: /root/.kube/config I0203 12:57:53.806981 8 log.go:172] (0xc001672420) (0xc00212a6e0) Create stream I0203 12:57:53.807108 8 log.go:172] (0xc001672420) (0xc00212a6e0) Stream added, broadcasting: 1 I0203 12:57:53.821940 8 log.go:172] (0xc001672420) Reply frame received for 1 I0203 12:57:53.822314 8 log.go:172] (0xc001672420) (0xc00151a000) Create stream I0203 12:57:53.822339 8 log.go:172] (0xc001672420) (0xc00151a000) Stream added, broadcasting: 3 I0203 12:57:53.828743 8 log.go:172] (0xc001672420) Reply frame received for 3 I0203 12:57:53.828862 8 log.go:172] (0xc001672420) (0xc00194edc0) Create stream I0203 12:57:53.828873 8 log.go:172] (0xc001672420) (0xc00194edc0) Stream added, broadcasting: 5 I0203 12:57:53.830886 8 log.go:172] (0xc001672420) Reply frame received for 5 I0203 12:57:54.032989 8 log.go:172] (0xc001672420) Data frame received for 3 I0203 12:57:54.033097 8 log.go:172] (0xc00151a000) (3) Data frame handling I0203 12:57:54.033137 8 log.go:172] (0xc00151a000) (3) Data frame sent I0203 12:57:54.203972 8 
log.go:172] (0xc001672420) Data frame received for 1 I0203 12:57:54.204111 8 log.go:172] (0xc001672420) (0xc00151a000) Stream removed, broadcasting: 3 I0203 12:57:54.204244 8 log.go:172] (0xc00212a6e0) (1) Data frame handling I0203 12:57:54.204270 8 log.go:172] (0xc00212a6e0) (1) Data frame sent I0203 12:57:54.204280 8 log.go:172] (0xc001672420) (0xc00212a6e0) Stream removed, broadcasting: 1 I0203 12:57:54.204482 8 log.go:172] (0xc001672420) (0xc00194edc0) Stream removed, broadcasting: 5 I0203 12:57:54.204666 8 log.go:172] (0xc001672420) Go away received I0203 12:57:54.204701 8 log.go:172] (0xc001672420) (0xc00212a6e0) Stream removed, broadcasting: 1 I0203 12:57:54.204761 8 log.go:172] (0xc001672420) (0xc00151a000) Stream removed, broadcasting: 3 I0203 12:57:54.204768 8 log.go:172] (0xc001672420) (0xc00194edc0) Stream removed, broadcasting: 5 Feb 3 12:57:54.204: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 3 12:57:54.204: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 12:57:54.204: INFO: >>> kubeConfig: /root/.kube/config I0203 12:57:54.257555 8 log.go:172] (0xc001672dc0) (0xc00212a960) Create stream I0203 12:57:54.257638 8 log.go:172] (0xc001672dc0) (0xc00212a960) Stream added, broadcasting: 1 I0203 12:57:54.269109 8 log.go:172] (0xc001672dc0) Reply frame received for 1 I0203 12:57:54.269183 8 log.go:172] (0xc001672dc0) (0xc001356140) Create stream I0203 12:57:54.269196 8 log.go:172] (0xc001672dc0) (0xc001356140) Stream added, broadcasting: 3 I0203 12:57:54.272661 8 log.go:172] (0xc001672dc0) Reply frame received for 3 I0203 12:57:54.272691 8 log.go:172] (0xc001672dc0) (0xc00212aa00) Create stream I0203 12:57:54.272698 8 log.go:172] (0xc001672dc0) (0xc00212aa00) Stream added, broadcasting: 5 I0203 12:57:54.274489 8 
log.go:172] (0xc001672dc0) Reply frame received for 5 I0203 12:57:54.382345 8 log.go:172] (0xc001672dc0) Data frame received for 3 I0203 12:57:54.382438 8 log.go:172] (0xc001356140) (3) Data frame handling I0203 12:57:54.382471 8 log.go:172] (0xc001356140) (3) Data frame sent I0203 12:57:54.611178 8 log.go:172] (0xc001672dc0) Data frame received for 1 I0203 12:57:54.611305 8 log.go:172] (0xc00212a960) (1) Data frame handling I0203 12:57:54.611368 8 log.go:172] (0xc00212a960) (1) Data frame sent I0203 12:57:54.611408 8 log.go:172] (0xc001672dc0) (0xc001356140) Stream removed, broadcasting: 3 I0203 12:57:54.611500 8 log.go:172] (0xc001672dc0) (0xc00212a960) Stream removed, broadcasting: 1 I0203 12:57:54.611712 8 log.go:172] (0xc001672dc0) (0xc00212aa00) Stream removed, broadcasting: 5 I0203 12:57:54.611916 8 log.go:172] (0xc001672dc0) Go away received I0203 12:57:54.612040 8 log.go:172] (0xc001672dc0) (0xc00212a960) Stream removed, broadcasting: 1 I0203 12:57:54.612078 8 log.go:172] (0xc001672dc0) (0xc001356140) Stream removed, broadcasting: 3 I0203 12:57:54.612118 8 log.go:172] (0xc001672dc0) (0xc00212aa00) Stream removed, broadcasting: 5 Feb 3 12:57:54.612: INFO: Exec stderr: "" Feb 3 12:57:54.613: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 12:57:54.613: INFO: >>> kubeConfig: /root/.kube/config I0203 12:57:54.702846 8 log.go:172] (0xc000f9bb80) (0xc00194f0e0) Create stream I0203 12:57:54.702943 8 log.go:172] (0xc000f9bb80) (0xc00194f0e0) Stream added, broadcasting: 1 I0203 12:57:54.714362 8 log.go:172] (0xc000f9bb80) Reply frame received for 1 I0203 12:57:54.714482 8 log.go:172] (0xc000f9bb80) (0xc00239c280) Create stream I0203 12:57:54.714504 8 log.go:172] (0xc000f9bb80) (0xc00239c280) Stream added, broadcasting: 3 I0203 12:57:54.716377 8 log.go:172] (0xc000f9bb80) Reply frame received 
for 3 I0203 12:57:54.716424 8 log.go:172] (0xc000f9bb80) (0xc00194f220) Create stream I0203 12:57:54.716438 8 log.go:172] (0xc000f9bb80) (0xc00194f220) Stream added, broadcasting: 5 I0203 12:57:54.718045 8 log.go:172] (0xc000f9bb80) Reply frame received for 5 I0203 12:57:54.869116 8 log.go:172] (0xc000f9bb80) Data frame received for 3 I0203 12:57:54.869194 8 log.go:172] (0xc00239c280) (3) Data frame handling I0203 12:57:54.869218 8 log.go:172] (0xc00239c280) (3) Data frame sent I0203 12:57:54.968449 8 log.go:172] (0xc000f9bb80) (0xc00239c280) Stream removed, broadcasting: 3 I0203 12:57:54.968812 8 log.go:172] (0xc000f9bb80) Data frame received for 1 I0203 12:57:54.968941 8 log.go:172] (0xc000f9bb80) (0xc00194f220) Stream removed, broadcasting: 5 I0203 12:57:54.969230 8 log.go:172] (0xc00194f0e0) (1) Data frame handling I0203 12:57:54.969282 8 log.go:172] (0xc00194f0e0) (1) Data frame sent I0203 12:57:54.969305 8 log.go:172] (0xc000f9bb80) (0xc00194f0e0) Stream removed, broadcasting: 1 I0203 12:57:54.969329 8 log.go:172] (0xc000f9bb80) Go away received I0203 12:57:54.969754 8 log.go:172] (0xc000f9bb80) (0xc00194f0e0) Stream removed, broadcasting: 1 I0203 12:57:54.969779 8 log.go:172] (0xc000f9bb80) (0xc00239c280) Stream removed, broadcasting: 3 I0203 12:57:54.969783 8 log.go:172] (0xc000f9bb80) (0xc00194f220) Stream removed, broadcasting: 5 Feb 3 12:57:54.969: INFO: Exec stderr: "" Feb 3 12:57:54.969: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 12:57:54.970: INFO: >>> kubeConfig: /root/.kube/config I0203 12:57:55.025671 8 log.go:172] (0xc0021306e0) (0xc00194f540) Create stream I0203 12:57:55.025750 8 log.go:172] (0xc0021306e0) (0xc00194f540) Stream added, broadcasting: 1 I0203 12:57:55.031705 8 log.go:172] (0xc0021306e0) Reply frame received for 1 I0203 12:57:55.031741 8 log.go:172] (0xc0021306e0) 
(0xc001356320) Create stream I0203 12:57:55.031749 8 log.go:172] (0xc0021306e0) (0xc001356320) Stream added, broadcasting: 3 I0203 12:57:55.032757 8 log.go:172] (0xc0021306e0) Reply frame received for 3 I0203 12:57:55.032784 8 log.go:172] (0xc0021306e0) (0xc00212ab40) Create stream I0203 12:57:55.032791 8 log.go:172] (0xc0021306e0) (0xc00212ab40) Stream added, broadcasting: 5 I0203 12:57:55.033963 8 log.go:172] (0xc0021306e0) Reply frame received for 5 I0203 12:57:55.104141 8 log.go:172] (0xc0021306e0) Data frame received for 3 I0203 12:57:55.104189 8 log.go:172] (0xc001356320) (3) Data frame handling I0203 12:57:55.104216 8 log.go:172] (0xc001356320) (3) Data frame sent I0203 12:57:55.222588 8 log.go:172] (0xc0021306e0) (0xc001356320) Stream removed, broadcasting: 3 I0203 12:57:55.222935 8 log.go:172] (0xc0021306e0) Data frame received for 1 I0203 12:57:55.223107 8 log.go:172] (0xc0021306e0) (0xc00212ab40) Stream removed, broadcasting: 5 I0203 12:57:55.223183 8 log.go:172] (0xc00194f540) (1) Data frame handling I0203 12:57:55.223205 8 log.go:172] (0xc00194f540) (1) Data frame sent I0203 12:57:55.223223 8 log.go:172] (0xc0021306e0) (0xc00194f540) Stream removed, broadcasting: 1 I0203 12:57:55.223244 8 log.go:172] (0xc0021306e0) Go away received I0203 12:57:55.223847 8 log.go:172] (0xc0021306e0) (0xc00194f540) Stream removed, broadcasting: 1 I0203 12:57:55.223865 8 log.go:172] (0xc0021306e0) (0xc001356320) Stream removed, broadcasting: 3 I0203 12:57:55.223876 8 log.go:172] (0xc0021306e0) (0xc00212ab40) Stream removed, broadcasting: 5 Feb 3 12:57:55.223: INFO: Exec stderr: "" Feb 3 12:57:55.224: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7284 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 12:57:55.224: INFO: >>> kubeConfig: /root/.kube/config I0203 12:57:55.293372 8 log.go:172] (0xc001673c30) (0xc00212af00) Create stream I0203 
12:57:55.293465 8 log.go:172] (0xc001673c30) (0xc00212af00) Stream added, broadcasting: 1 I0203 12:57:55.302945 8 log.go:172] (0xc001673c30) Reply frame received for 1 I0203 12:57:55.303031 8 log.go:172] (0xc001673c30) (0xc00151a0a0) Create stream I0203 12:57:55.303047 8 log.go:172] (0xc001673c30) (0xc00151a0a0) Stream added, broadcasting: 3 I0203 12:57:55.304711 8 log.go:172] (0xc001673c30) Reply frame received for 3 I0203 12:57:55.304764 8 log.go:172] (0xc001673c30) (0xc0013563c0) Create stream I0203 12:57:55.304785 8 log.go:172] (0xc001673c30) (0xc0013563c0) Stream added, broadcasting: 5 I0203 12:57:55.306461 8 log.go:172] (0xc001673c30) Reply frame received for 5 I0203 12:57:55.476541 8 log.go:172] (0xc001673c30) Data frame received for 3 I0203 12:57:55.476626 8 log.go:172] (0xc00151a0a0) (3) Data frame handling I0203 12:57:55.476650 8 log.go:172] (0xc00151a0a0) (3) Data frame sent I0203 12:57:55.598772 8 log.go:172] (0xc001673c30) Data frame received for 1 I0203 12:57:55.598890 8 log.go:172] (0xc001673c30) (0xc00151a0a0) Stream removed, broadcasting: 3 I0203 12:57:55.598970 8 log.go:172] (0xc00212af00) (1) Data frame handling I0203 12:57:55.599071 8 log.go:172] (0xc00212af00) (1) Data frame sent I0203 12:57:55.599112 8 log.go:172] (0xc001673c30) (0xc0013563c0) Stream removed, broadcasting: 5 I0203 12:57:55.599178 8 log.go:172] (0xc001673c30) (0xc00212af00) Stream removed, broadcasting: 1 I0203 12:57:55.599214 8 log.go:172] (0xc001673c30) Go away received I0203 12:57:55.599453 8 log.go:172] (0xc001673c30) (0xc00212af00) Stream removed, broadcasting: 1 I0203 12:57:55.599465 8 log.go:172] (0xc001673c30) (0xc00151a0a0) Stream removed, broadcasting: 3 I0203 12:57:55.599470 8 log.go:172] (0xc001673c30) (0xc0013563c0) Stream removed, broadcasting: 5 Feb 3 12:57:55.599: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 12:57:55.599: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7284" for this suite.
Feb 3 12:58:57.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 12:58:57.967: INFO: namespace e2e-kubelet-etc-hosts-7284 deletion completed in 1m2.351902317s

• [SLOW TEST:88.626 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 12:58:57.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 3 12:58:58.131: INFO: Waiting up to 5m0s for pod "pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095" in namespace "emptydir-2196" to be "success or failure"
Feb 3 12:58:58.138: INFO: Pod "pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.304454ms
Feb 3 12:59:00.147: INFO: Pod "pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015725035s
Feb 3 12:59:02.166: INFO: Pod "pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034630299s
Feb 3 12:59:04.175: INFO: Pod "pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043646882s
Feb 3 12:59:06.183: INFO: Pod "pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052022962s
Feb 3 12:59:08.192: INFO: Pod "pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060261276s
STEP: Saw pod success
Feb 3 12:59:08.192: INFO: Pod "pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095" satisfied condition "success or failure"
Feb 3 12:59:08.195: INFO: Trying to get logs from node iruya-node pod pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095 container test-container:
STEP: delete the pod
Feb 3 12:59:08.256: INFO: Waiting for pod pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095 to disappear
Feb 3 12:59:08.265: INFO: Pod pod-93409b9f-51b6-4a4a-a1a0-19296e2f8095 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 12:59:08.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2196" for this suite.
Feb 3 12:59:14.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 12:59:14.482: INFO: namespace emptydir-2196 deletion completed in 6.175214615s

• [SLOW TEST:16.513 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 12:59:14.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 3 12:59:27.358: INFO: Successfully updated pod "pod-update-9ca0ecf7-837c-4e48-8780-e462237fcd33"
STEP: verifying the updated pod is in kubernetes
Feb 3 12:59:27.413: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 12:59:27.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7405" for this suite.
Feb 3 12:59:49.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 12:59:49.659: INFO: namespace pods-7405 deletion completed in 22.179715355s

• [SLOW TEST:35.176 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 12:59:49.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 12:59:55.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4129" for this suite.
Feb 3 13:00:01.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:00:01.614: INFO: namespace watch-4129 deletion completed in 6.260205749s
• [SLOW TEST:11.955 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:00:01.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 3 13:00:01.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9871'
Feb 3 13:00:04.561: INFO: stderr: ""
Feb 3 13:00:04.562: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 3 13:00:05.587: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:00:05.587: INFO: Found 0 / 1
Feb 3 13:00:06.579: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:00:06.579: INFO: Found 0 / 1
Feb 3 13:00:07.567: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:00:07.567: INFO: Found 0 / 1
Feb 3 13:00:08.578: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:00:08.579: INFO: Found 0 / 1
Feb 3 13:00:09.572: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:00:09.572: INFO: Found 0 / 1
Feb 3 13:00:10.579: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:00:10.579: INFO: Found 0 / 1
Feb 3 13:00:11.574: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:00:11.574: INFO: Found 1 / 1
Feb 3 13:00:11.574: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 3 13:00:11.594: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:00:11.594: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Feb 3 13:00:11.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7sxpc redis-master --namespace=kubectl-9871'
Feb 3 13:00:11.795: INFO: stderr: ""
Feb 3 13:00:11.795: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Feb 13:00:10.831 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Feb 13:00:10.831 # Server started, Redis version 3.2.12\n1:M 03 Feb 13:00:10.832 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Feb 13:00:10.832 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 3 13:00:11.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7sxpc redis-master --namespace=kubectl-9871 --tail=1'
Feb 3 13:00:12.067: INFO: stderr: ""
Feb 3 13:00:12.067: INFO: stdout: "1:M 03 Feb 13:00:10.832 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 3 13:00:12.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7sxpc redis-master --namespace=kubectl-9871 --limit-bytes=1'
Feb 3 13:00:12.212: INFO: stderr: ""
Feb 3 13:00:12.212: INFO: stdout: " "
STEP: exposing timestamps
Feb 3 13:00:12.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7sxpc redis-master --namespace=kubectl-9871 --tail=1 --timestamps'
Feb 3 13:00:12.366: INFO: stderr: ""
Feb 3 13:00:12.366: INFO: stdout: "2020-02-03T13:00:10.832571856Z 1:M 03 Feb 13:00:10.832 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 3 13:00:14.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7sxpc redis-master --namespace=kubectl-9871 --since=1s'
Feb 3 13:00:15.111: INFO: stderr: ""
Feb 3 13:00:15.111: INFO: stdout: ""
Feb 3 13:00:15.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7sxpc redis-master --namespace=kubectl-9871 --since=24h'
Feb 3 13:00:15.273: INFO: stderr: ""
Feb 3 13:00:15.274: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Feb 13:00:10.831 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Feb 13:00:10.831 # Server started, Redis version 3.2.12\n1:M 03 Feb 13:00:10.832 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Feb 13:00:10.832 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 3 13:00:15.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9871'
Feb 3 13:00:15.377: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 3 13:00:15.378: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 3 13:00:15.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9871'
Feb 3 13:00:15.510: INFO: stderr: "No resources found.\n"
Feb 3 13:00:15.510: INFO: stdout: ""
Feb 3 13:00:15.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9871 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 3 13:00:15.686: INFO: stderr: ""
Feb 3 13:00:15.687: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:00:15.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9871" for this suite.
Feb 3 13:00:37.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:00:37.928: INFO: namespace kubectl-9871 deletion completed in 22.23293913s
• [SLOW TEST:36.314 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:00:37.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:00:48.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1740" for this suite.
Feb 3 13:00:54.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:00:54.328: INFO: namespace emptydir-wrapper-1740 deletion completed in 6.127817707s
• [SLOW TEST:16.398 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:00:54.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 3 13:00:54.493: INFO: Waiting up to 5m0s for pod "downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241" in namespace "downward-api-9330" to be "success or failure"
Feb 3 13:00:54.540: INFO: Pod "downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241": Phase="Pending", Reason="", readiness=false. Elapsed: 46.473444ms
Feb 3 13:00:56.563: INFO: Pod "downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069675572s
Feb 3 13:00:58.578: INFO: Pod "downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084672378s
Feb 3 13:01:00.597: INFO: Pod "downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103977398s
Feb 3 13:01:02.609: INFO: Pod "downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115391409s
STEP: Saw pod success
Feb 3 13:01:02.609: INFO: Pod "downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241" satisfied condition "success or failure"
Feb 3 13:01:02.616: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241 container client-container:
STEP: delete the pod
Feb 3 13:01:02.701: INFO: Waiting for pod downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241 to disappear
Feb 3 13:01:02.706: INFO: Pod downwardapi-volume-623ecf78-48a4-4890-a4dd-2eec62fc5241 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:01:02.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9330" for this suite.
Feb 3 13:01:12.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:01:13.000: INFO: namespace downward-api-9330 deletion completed in 10.288042955s
• [SLOW TEST:18.672 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:01:13.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4978, will wait for the garbage collector to delete the pods
Feb 3 13:01:23.205: INFO: Deleting Job.batch foo took: 12.903507ms
Feb 3 13:01:23.506: INFO: Terminating Job.batch foo pods took: 300.59763ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:02:06.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4978" for this suite.
Feb 3 13:02:12.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:02:12.858: INFO: namespace job-4978 deletion completed in 6.133690758s
• [SLOW TEST:59.857 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:02:12.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 3 13:02:13.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3866 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 3 13:02:23.217: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0203 13:02:21.876051 245 log.go:172] (0xc000106e70) (0xc0003d8be0) Create stream\nI0203 13:02:21.876130 245 log.go:172] (0xc000106e70) (0xc0003d8be0) Stream added, broadcasting: 1\nI0203 13:02:21.886343 245 log.go:172] (0xc000106e70) Reply frame received for 1\nI0203 13:02:21.886401 245 log.go:172] (0xc000106e70) (0xc0003d8000) Create stream\nI0203 13:02:21.886410 245 log.go:172] (0xc000106e70) (0xc0003d8000) Stream added, broadcasting: 3\nI0203 13:02:21.888071 245 log.go:172] (0xc000106e70) Reply frame received for 3\nI0203 13:02:21.888112 245 log.go:172] (0xc000106e70) (0xc0003d80a0) Create stream\nI0203 13:02:21.888126 245 log.go:172] (0xc000106e70) (0xc0003d80a0) Stream added, broadcasting: 5\nI0203 13:02:21.889502 245 log.go:172] (0xc000106e70) Reply frame received for 5\nI0203 13:02:21.889523 245 log.go:172] (0xc000106e70) (0xc0005f4140) Create stream\nI0203 13:02:21.889529 245 log.go:172] (0xc000106e70) (0xc0005f4140) Stream added, broadcasting: 7\nI0203 13:02:21.892160 245 log.go:172] (0xc000106e70) Reply frame received for 7\nI0203 13:02:21.892570 245 log.go:172] (0xc0003d8000) (3) Writing data frame\nI0203 13:02:21.892757 245 log.go:172] (0xc0003d8000) (3) Writing data frame\nI0203 13:02:21.900621 245 log.go:172] (0xc000106e70) Data frame received for 5\nI0203 13:02:21.900640 245 log.go:172] (0xc0003d80a0) (5) Data frame handling\nI0203 13:02:21.900654 245 log.go:172] (0xc0003d80a0) (5) Data frame sent\nI0203 13:02:21.904115 245 log.go:172] (0xc000106e70) Data frame received for 5\nI0203 13:02:21.904161 245 log.go:172] (0xc0003d80a0) (5) Data frame handling\nI0203 13:02:21.904180 245 log.go:172] (0xc0003d80a0) (5) Data frame sent\nI0203 13:02:23.162704 245 log.go:172] (0xc000106e70) Data frame received for 1\nI0203 13:02:23.162967 245 log.go:172] (0xc000106e70) (0xc0003d80a0) Stream removed, broadcasting: 5\nI0203 13:02:23.163129 245 log.go:172] (0xc0003d8be0) (1) Data frame handling\nI0203 13:02:23.163172 245 log.go:172] (0xc0003d8be0) (1) Data frame sent\nI0203 13:02:23.163287 245 log.go:172] (0xc000106e70) (0xc0005f4140) Stream removed, broadcasting: 7\nI0203 13:02:23.163388 245 log.go:172] (0xc000106e70) (0xc0003d8be0) Stream removed, broadcasting: 1\nI0203 13:02:23.163688 245 log.go:172] (0xc000106e70) (0xc0003d8000) Stream removed, broadcasting: 3\nI0203 13:02:23.163921 245 log.go:172] (0xc000106e70) Go away received\nI0203 13:02:23.164304 245 log.go:172] (0xc000106e70) (0xc0003d8be0) Stream removed, broadcasting: 1\nI0203 13:02:23.164349 245 log.go:172] (0xc000106e70) (0xc0003d8000) Stream removed, broadcasting: 3\nI0203 13:02:23.164466 245 log.go:172] (0xc000106e70) (0xc0003d80a0) Stream removed, broadcasting: 5\nI0203 13:02:23.164568 245 log.go:172] (0xc000106e70) (0xc0005f4140) Stream removed, broadcasting: 7\n"
Feb 3 13:02:23.218: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:02:25.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3866" for this suite.
Feb 3 13:02:31.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:02:31.389: INFO: namespace kubectl-3866 deletion completed in 6.155760493s
• [SLOW TEST:18.531 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:02:31.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 3 13:02:31.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3856'
Feb 3 13:02:31.855: INFO: stderr: ""
Feb 3 13:02:31.855: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 3 13:02:31.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3856'
Feb 3 13:02:32.665: INFO: stderr: ""
Feb 3 13:02:32.665: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 3 13:02:33.772: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:33.773: INFO: Found 0 / 1
Feb 3 13:02:34.676: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:34.676: INFO: Found 0 / 1
Feb 3 13:02:35.685: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:35.685: INFO: Found 0 / 1
Feb 3 13:02:36.677: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:36.677: INFO: Found 0 / 1
Feb 3 13:02:37.672: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:37.672: INFO: Found 0 / 1
Feb 3 13:02:38.686: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:38.686: INFO: Found 0 / 1
Feb 3 13:02:40.131: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:40.131: INFO: Found 0 / 1
Feb 3 13:02:40.674: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:40.675: INFO: Found 0 / 1
Feb 3 13:02:41.681: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:41.681: INFO: Found 1 / 1
Feb 3 13:02:41.681: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 3 13:02:41.686: INFO: Selector matched 1 pods for map[app:redis]
Feb 3 13:02:41.686: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 3 13:02:41.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-pwltl --namespace=kubectl-3856'
Feb 3 13:02:41.933: INFO: stderr: ""
Feb 3 13:02:41.933: INFO: stdout: "Name: redis-master-pwltl\nNamespace: kubectl-3856\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Mon, 03 Feb 2020 13:02:32 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://9b8738f4e5c448a76578dc65711630a01bb1d80b1a193331b0a271f3810e7f7f\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 03 Feb 2020 13:02:39 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pzxfq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-pzxfq:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pzxfq\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 10s default-scheduler Successfully assigned kubectl-3856/redis-master-pwltl to iruya-node\n Normal Pulled 5s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, iruya-node Created container redis-master\n Normal Started 1s kubelet, iruya-node Started container redis-master\n"
Feb 3 13:02:41.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3856'
Feb 3 13:02:42.117: INFO: stderr: ""
Feb 3 13:02:42.118: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3856\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 11s replication-controller Created pod: redis-master-pwltl\n"
Feb 3 13:02:42.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3856'
Feb 3 13:02:42.380: INFO: stderr: ""
Feb 3 13:02:42.380: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3856\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.107.26.233\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n"
Feb 3 13:02:42.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb 3 13:02:42.596: INFO: stderr: ""
Feb 3 13:02:42.597: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Mon, 03 Feb 2020 13:02:37 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 03 Feb 2020 13:02:37 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 03 Feb 2020 13:02:37 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 03 Feb 2020 13:02:37 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 183d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 114d\n kubectl-3856 redis-master-pwltl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Feb 3 13:02:42.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3856'
Feb 3 13:02:42.787: INFO: stderr: ""
Feb 3 13:02:42.787: INFO: stdout: "Name: kubectl-3856\nLabels: e2e-framework=kubectl\n e2e-run=93fa20d9-20dc-492b-beab-ce2fdbf52e63\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:02:42.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3856" for this suite.
Feb 3 13:03:04.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:03:04.915: INFO: namespace kubectl-3856 deletion completed in 22.123653128s
• [SLOW TEST:33.526 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:03:04.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Feb 3 13:03:04.988: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:03:05.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9567" for this suite. Feb 3 13:03:11.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:03:11.247: INFO: namespace kubectl-9567 deletion completed in 6.173950855s • [SLOW TEST:6.331 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:03:11.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 3 13:03:11.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1630' Feb 3 13:03:11.560: INFO: stderr: "" Feb 3 13:03:11.560: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Feb 3 13:03:11.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1630' Feb 3 13:03:19.315: INFO: stderr: "" Feb 3 13:03:19.316: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:03:19.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1630" for this suite. 
Feb 3 13:03:25.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:03:25.497: INFO: namespace kubectl-1630 deletion completed in 6.146024032s • [SLOW TEST:14.250 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:03:25.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 3 13:03:25.588: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5" in namespace "projected-5115" to be "success or failure" Feb 3 13:03:25.597: INFO: Pod 
"downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185024ms Feb 3 13:03:27.607: INFO: Pod "downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018590772s Feb 3 13:03:29.615: INFO: Pod "downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026479795s Feb 3 13:03:31.624: INFO: Pod "downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035733583s Feb 3 13:03:34.279: INFO: Pod "downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.690271669s Feb 3 13:03:36.290: INFO: Pod "downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.701868995s STEP: Saw pod success Feb 3 13:03:36.291: INFO: Pod "downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5" satisfied condition "success or failure" Feb 3 13:03:36.294: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5 container client-container: STEP: delete the pod Feb 3 13:03:36.368: INFO: Waiting for pod downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5 to disappear Feb 3 13:03:36.377: INFO: Pod downwardapi-volume-4f334952-4516-476f-81ef-2653a75db5e5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:03:36.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5115" for this suite. 
Feb 3 13:03:42.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:03:42.522: INFO: namespace projected-5115 deletion completed in 6.136109521s • [SLOW TEST:17.024 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:03:42.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Feb 3 13:03:50.903: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Feb 3 13:04:01.081: INFO: no pod exists with the name we were looking for, assuming the termination request was observed 
and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:04:01.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9329" for this suite. Feb 3 13:04:07.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:04:07.351: INFO: namespace pods-9329 deletion completed in 6.25341733s • [SLOW TEST:24.828 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:04:07.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3041.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3041.svc.cluster.local;check="$$(dig +tcp +noall 
+answer +search dns-test-service.dns-3041.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3041.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3041.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3041.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3041.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3041.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3041.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3041.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3041.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 186.29.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.29.186_udp@PTR;check="$$(dig +tcp +noall +answer +search 186.29.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.29.186_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3041.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3041.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3041.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3041.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3041.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3041.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3041.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3041.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3041.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3041.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3041.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 186.29.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.29.186_udp@PTR;check="$$(dig +tcp +noall +answer +search 186.29.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.29.186_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 3 13:04:19.663: INFO: Unable to read wheezy_udp@dns-test-service.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.670: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.676: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.682: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.688: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.695: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.700: INFO: Unable to read wheezy_udp@PodARecord from pod 
dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.706: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.711: INFO: Unable to read 10.108.29.186_udp@PTR from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.716: INFO: Unable to read 10.108.29.186_tcp@PTR from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.722: INFO: Unable to read jessie_udp@dns-test-service.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.731: INFO: Unable to read jessie_tcp@dns-test-service.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.739: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.743: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.749: INFO: Unable to read 
jessie_udp@_http._tcp.test-service-2.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.756: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-3041.svc.cluster.local from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.762: INFO: Unable to read jessie_udp@PodARecord from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.768: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.772: INFO: Unable to read 10.108.29.186_udp@PTR from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.794: INFO: Unable to read 10.108.29.186_tcp@PTR from pod dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb: the server could not find the requested resource (get pods dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb) Feb 3 13:04:19.794: INFO: Lookups using dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb failed for: [wheezy_udp@dns-test-service.dns-3041.svc.cluster.local wheezy_tcp@dns-test-service.dns-3041.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-3041.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-3041.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.108.29.186_udp@PTR 
10.108.29.186_tcp@PTR jessie_udp@dns-test-service.dns-3041.svc.cluster.local jessie_tcp@dns-test-service.dns-3041.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3041.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-3041.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-3041.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.108.29.186_udp@PTR 10.108.29.186_tcp@PTR] Feb 3 13:04:24.917: INFO: DNS probes using dns-3041/dns-test-eebbff44-b008-4834-be20-63ab28a2b7fb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:04:25.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3041" for this suite. Feb 3 13:04:31.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:04:31.301: INFO: namespace dns-3041 deletion completed in 6.122626545s • [SLOW TEST:23.949 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:04:31.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 3 13:04:31.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 3 13:04:31.745: INFO: stderr: "" Feb 3 13:04:31.745: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:04:31.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2812" for this suite. 
Feb 3 13:04:37.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:04:37.920: INFO: namespace kubectl-2812 deletion completed in 6.16397598s • [SLOW TEST:6.619 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:04:37.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 3 13:04:38.115: INFO: Waiting up to 5m0s for pod "pod-9da464b4-6f16-448c-945a-e3db29f67959" in namespace "emptydir-4077" to be "success or failure" Feb 3 13:04:38.156: INFO: Pod "pod-9da464b4-6f16-448c-945a-e3db29f67959": Phase="Pending", Reason="", readiness=false. Elapsed: 41.205782ms Feb 3 13:04:40.172: INFO: Pod "pod-9da464b4-6f16-448c-945a-e3db29f67959": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.056980931s Feb 3 13:04:42.188: INFO: Pod "pod-9da464b4-6f16-448c-945a-e3db29f67959": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072764876s Feb 3 13:04:44.194: INFO: Pod "pod-9da464b4-6f16-448c-945a-e3db29f67959": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078942963s Feb 3 13:04:46.223: INFO: Pod "pod-9da464b4-6f16-448c-945a-e3db29f67959": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107963036s Feb 3 13:04:48.265: INFO: Pod "pod-9da464b4-6f16-448c-945a-e3db29f67959": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149753632s STEP: Saw pod success Feb 3 13:04:48.265: INFO: Pod "pod-9da464b4-6f16-448c-945a-e3db29f67959" satisfied condition "success or failure" Feb 3 13:04:48.269: INFO: Trying to get logs from node iruya-node pod pod-9da464b4-6f16-448c-945a-e3db29f67959 container test-container: STEP: delete the pod Feb 3 13:04:48.316: INFO: Waiting for pod pod-9da464b4-6f16-448c-945a-e3db29f67959 to disappear Feb 3 13:04:48.327: INFO: Pod pod-9da464b4-6f16-448c-945a-e3db29f67959 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:04:48.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4077" for this suite. 
Feb 3 13:04:54.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:04:54.460: INFO: namespace emptydir-4077 deletion completed in 6.129562481s • [SLOW TEST:16.540 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:04:54.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:05:42.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-413" for this suite. Feb 3 13:05:49.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:05:49.184: INFO: namespace namespaces-413 deletion completed in 6.189567236s STEP: Destroying namespace "nsdeletetest-2243" for this suite. Feb 3 13:05:49.186: INFO: Namespace nsdeletetest-2243 was already deleted STEP: Destroying namespace "nsdeletetest-167" for this suite. Feb 3 13:05:55.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:05:55.444: INFO: namespace nsdeletetest-167 deletion completed in 6.257246994s • [SLOW TEST:60.983 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:05:55.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:06:55.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-742" for this suite. Feb 3 13:07:17.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:07:17.865: INFO: namespace container-probe-742 deletion completed in 22.215876097s • [SLOW TEST:82.421 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:07:17.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with 
projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-tkx6 STEP: Creating a pod to test atomic-volume-subpath Feb 3 13:07:18.071: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tkx6" in namespace "subpath-664" to be "success or failure" Feb 3 13:07:18.081: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.646013ms Feb 3 13:07:20.093: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02175576s Feb 3 13:07:22.128: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057372641s Feb 3 13:07:24.140: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069109212s Feb 3 13:07:26.155: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084240576s Feb 3 13:07:28.164: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093188577s Feb 3 13:07:30.171: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. Elapsed: 12.100134673s Feb 3 13:07:32.179: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. Elapsed: 14.108113019s Feb 3 13:07:34.194: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. Elapsed: 16.12274212s Feb 3 13:07:36.202: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. Elapsed: 18.131164355s Feb 3 13:07:38.220: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. Elapsed: 20.14873161s Feb 3 13:07:40.228: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.156573642s Feb 3 13:07:42.236: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. Elapsed: 24.165372187s Feb 3 13:07:44.251: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. Elapsed: 26.180218323s Feb 3 13:07:46.272: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. Elapsed: 28.201002349s Feb 3 13:07:48.284: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Running", Reason="", readiness=true. Elapsed: 30.21271495s Feb 3 13:07:50.294: INFO: Pod "pod-subpath-test-projected-tkx6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.222753034s STEP: Saw pod success Feb 3 13:07:50.294: INFO: Pod "pod-subpath-test-projected-tkx6" satisfied condition "success or failure" Feb 3 13:07:50.297: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-tkx6 container test-container-subpath-projected-tkx6: STEP: delete the pod Feb 3 13:07:50.733: INFO: Waiting for pod pod-subpath-test-projected-tkx6 to disappear Feb 3 13:07:50.749: INFO: Pod pod-subpath-test-projected-tkx6 no longer exists STEP: Deleting pod pod-subpath-test-projected-tkx6 Feb 3 13:07:50.750: INFO: Deleting pod "pod-subpath-test-projected-tkx6" in namespace "subpath-664" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:07:50.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-664" for this suite. 
Feb 3 13:07:56.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:07:56.962: INFO: namespace subpath-664 deletion completed in 6.199556894s • [SLOW TEST:39.096 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:07:56.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-2a9d3e30-8207-4d86-bb7a-24c71f0f26e7 STEP: Creating secret with name s-test-opt-upd-c9b0a115-3417-435f-a50d-391622ccf509 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2a9d3e30-8207-4d86-bb7a-24c71f0f26e7 STEP: Updating secret s-test-opt-upd-c9b0a115-3417-435f-a50d-391622ccf509 STEP: Creating secret with name s-test-opt-create-a8a08e09-f940-4ce8-b61c-7707e2757217 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:09:26.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2780" for this suite. Feb 3 13:09:48.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:09:49.103: INFO: namespace secrets-2780 deletion completed in 22.158428716s • [SLOW TEST:112.140 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:09:49.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 3 13:09:49.185: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482" in namespace "projected-694" to be "success or failure" Feb 3 13:09:49.203: INFO: Pod 
"downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482": Phase="Pending", Reason="", readiness=false. Elapsed: 18.239813ms Feb 3 13:09:51.216: INFO: Pod "downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031237511s Feb 3 13:09:53.226: INFO: Pod "downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040325584s Feb 3 13:09:55.232: INFO: Pod "downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046734478s Feb 3 13:09:57.240: INFO: Pod "downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055214703s STEP: Saw pod success Feb 3 13:09:57.240: INFO: Pod "downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482" satisfied condition "success or failure" Feb 3 13:09:57.244: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482 container client-container: STEP: delete the pod Feb 3 13:09:57.280: INFO: Waiting for pod downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482 to disappear Feb 3 13:09:57.283: INFO: Pod downwardapi-volume-9ab2e6d9-6ad6-48f9-9147-f44e6f82f482 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:09:57.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-694" for this suite. 
Feb 3 13:10:03.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:10:03.563: INFO: namespace projected-694 deletion completed in 6.276215887s • [SLOW TEST:14.459 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:10:03.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 3 13:10:11.785: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:10:11.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8740" for this suite. Feb 3 13:10:17.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:10:17.995: INFO: namespace container-runtime-8740 deletion completed in 6.135150043s • [SLOW TEST:14.432 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:10:17.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-28f8680d-8022-4fba-b4e4-89f7c79a1250 in namespace container-probe-4524 Feb 3 13:10:28.125: INFO: Started pod liveness-28f8680d-8022-4fba-b4e4-89f7c79a1250 in namespace container-probe-4524 STEP: checking the pod's current state and verifying that restartCount is present Feb 3 13:10:28.131: INFO: Initial restart count of pod liveness-28f8680d-8022-4fba-b4e4-89f7c79a1250 is 0 Feb 3 13:10:48.868: INFO: Restart count of pod container-probe-4524/liveness-28f8680d-8022-4fba-b4e4-89f7c79a1250 is now 1 (20.737173757s elapsed) Feb 3 13:11:08.988: INFO: Restart count of pod container-probe-4524/liveness-28f8680d-8022-4fba-b4e4-89f7c79a1250 is now 2 (40.857697039s elapsed) Feb 3 13:11:27.801: INFO: Restart count of pod container-probe-4524/liveness-28f8680d-8022-4fba-b4e4-89f7c79a1250 is now 3 (59.670792699s elapsed) Feb 3 13:11:50.043: INFO: Restart count of pod container-probe-4524/liveness-28f8680d-8022-4fba-b4e4-89f7c79a1250 is now 4 (1m21.912874942s elapsed) Feb 3 13:12:48.315: INFO: Restart count of pod container-probe-4524/liveness-28f8680d-8022-4fba-b4e4-89f7c79a1250 is now 5 (2m20.184101105s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:12:48.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4524" for this suite. 
Feb 3 13:12:54.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:12:54.574: INFO: namespace container-probe-4524 deletion completed in 6.208739348s • [SLOW TEST:156.578 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:12:54.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 3 13:12:54.686: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:13:08.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2691" for this 
suite. Feb 3 13:13:14.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:13:14.307: INFO: namespace init-container-2691 deletion completed in 6.21361473s • [SLOW TEST:19.732 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:13:14.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-bf484e2e-9a70-41b8-8d0c-932860efd906 STEP: Creating a pod to test consume configMaps Feb 3 13:13:14.430: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e" in namespace "configmap-8827" to be "success or failure" Feb 3 13:13:14.435: INFO: Pod "pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.601334ms Feb 3 13:13:16.448: INFO: Pod "pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017334187s Feb 3 13:13:18.486: INFO: Pod "pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0551944s Feb 3 13:13:20.507: INFO: Pod "pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076835623s Feb 3 13:13:22.520: INFO: Pod "pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089868279s STEP: Saw pod success Feb 3 13:13:22.521: INFO: Pod "pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e" satisfied condition "success or failure" Feb 3 13:13:22.526: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e container configmap-volume-test: STEP: delete the pod Feb 3 13:13:22.656: INFO: Waiting for pod pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e to disappear Feb 3 13:13:22.661: INFO: Pod pod-configmaps-ea515c66-55a9-451b-8462-09ff07ba8e2e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:13:22.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8827" for this suite. 
Feb 3 13:13:28.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:13:28.835: INFO: namespace configmap-8827 deletion completed in 6.167639637s • [SLOW TEST:14.527 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:13:28.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0203 13:13:59.600998 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 3 13:13:59.601: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:13:59.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7425" for this suite. 
Feb 3 13:14:06.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:14:07.522: INFO: namespace gc-7425 deletion completed in 7.914602326s • [SLOW TEST:38.687 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:14:07.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Feb 3 13:14:07.759: INFO: Waiting up to 5m0s for pod "var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b" in namespace "var-expansion-8970" to be "success or failure" Feb 3 13:14:07.826: INFO: Pod "var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 67.172045ms Feb 3 13:14:09.837: INFO: Pod "var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.077931558s Feb 3 13:14:11.843: INFO: Pod "var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084131512s Feb 3 13:14:13.866: INFO: Pod "var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107180862s Feb 3 13:14:15.884: INFO: Pod "var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12458609s STEP: Saw pod success Feb 3 13:14:15.884: INFO: Pod "var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b" satisfied condition "success or failure" Feb 3 13:14:15.891: INFO: Trying to get logs from node iruya-node pod var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b container dapi-container: STEP: delete the pod Feb 3 13:14:16.040: INFO: Waiting for pod var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b to disappear Feb 3 13:14:16.045: INFO: Pod var-expansion-36dad983-074a-4281-9d51-51e2591d0c4b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:14:16.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8970" for this suite. 
Feb 3 13:14:22.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:14:22.179: INFO: namespace var-expansion-8970 deletion completed in 6.130613187s • [SLOW TEST:14.655 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:14:22.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 3 13:14:29.441: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 
Feb 3 13:14:30.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2380" for this suite. Feb 3 13:14:36.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:14:36.654: INFO: namespace container-runtime-2380 deletion completed in 6.235041024s • [SLOW TEST:14.474 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:14:36.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-6425f397-f5bf-4a9f-80cf-42af1823295c [AfterEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:14:36.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8443" for this suite. Feb 3 13:14:42.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:14:42.927: INFO: namespace secrets-8443 deletion completed in 6.148891167s • [SLOW TEST:6.273 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:14:42.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-7203/configmap-test-aa08ce03-1116-4564-be4a-40e340998fc4 STEP: Creating a pod to test consume configMaps Feb 3 13:14:43.042: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8" in namespace "configmap-7203" to be "success or failure" Feb 3 13:14:43.061: INFO: Pod "pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8": Phase="Pending", Reason="", 
readiness=false. Elapsed: 18.436997ms Feb 3 13:14:45.092: INFO: Pod "pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049763101s Feb 3 13:14:47.103: INFO: Pod "pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061169073s Feb 3 13:14:49.147: INFO: Pod "pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104636311s Feb 3 13:14:51.156: INFO: Pod "pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8": Phase="Running", Reason="", readiness=true. Elapsed: 8.113476587s Feb 3 13:14:53.167: INFO: Pod "pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.124792156s STEP: Saw pod success Feb 3 13:14:53.167: INFO: Pod "pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8" satisfied condition "success or failure" Feb 3 13:14:53.173: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8 container env-test: STEP: delete the pod Feb 3 13:14:53.343: INFO: Waiting for pod pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8 to disappear Feb 3 13:14:53.464: INFO: Pod pod-configmaps-4d83ace3-9fb5-4c69-be89-097b01444dc8 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:14:53.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7203" for this suite. 
Feb 3 13:14:59.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:14:59.680: INFO: namespace configmap-7203 deletion completed in 6.20650289s
• [SLOW TEST:16.752 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:14:59.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 3 13:14:59.797: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571" in namespace "downward-api-3414" to be "success or failure"
Feb 3 13:14:59.811: INFO: Pod "downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571": Phase="Pending", Reason="", readiness=false. Elapsed: 13.85457ms
Feb 3 13:15:02.549: INFO: Pod "downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.751966412s
Feb 3 13:15:04.574: INFO: Pod "downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571": Phase="Pending", Reason="", readiness=false. Elapsed: 4.777472647s
Feb 3 13:15:06.585: INFO: Pod "downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571": Phase="Pending", Reason="", readiness=false. Elapsed: 6.788155984s
Feb 3 13:15:08.600: INFO: Pod "downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.8036686s
STEP: Saw pod success
Feb 3 13:15:08.601: INFO: Pod "downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571" satisfied condition "success or failure"
Feb 3 13:15:08.605: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571 container client-container:
STEP: delete the pod
Feb 3 13:15:08.784: INFO: Waiting for pod downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571 to disappear
Feb 3 13:15:08.790: INFO: Pod downwardapi-volume-aa481063-1427-4d58-aa84-b6fd5f8f2571 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:15:08.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3414" for this suite.
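The Downward API volume test above mounts the container's own resource fields as files and checks that the mounted file reflects the container's cpu request. A hedged sketch of an equivalent manifest (names, image, and the request value are illustrative, not taken from the suite):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test   # hypothetical name; the suite appends a UUID
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                 # illustrative request
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m             # the file holds the request expressed in units of the divisor
```

With a 250m request and a 1m divisor, the mounted file should contain the request in millicores; the framework reads the container's logs and compares them against the value it set on the pod spec.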
Feb 3 13:15:14.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:15:15.010: INFO: namespace downward-api-3414 deletion completed in 6.214279794s
• [SLOW TEST:15.330 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:15:15.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3462
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3462
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3462
Feb 3 13:15:15.125: INFO: Found 0 stateful pods, waiting for 1
Feb 3
13:15:25.136: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 3 13:15:25.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 3 13:15:27.580: INFO: stderr: "I0203 13:15:27.107428 494 log.go:172] (0xc000118d10) (0xc000646820) Create stream\nI0203 13:15:27.107494 494 log.go:172] (0xc000118d10) (0xc000646820) Stream added, broadcasting: 1\nI0203 13:15:27.116826 494 log.go:172] (0xc000118d10) Reply frame received for 1\nI0203 13:15:27.116917 494 log.go:172] (0xc000118d10) (0xc0007340a0) Create stream\nI0203 13:15:27.116938 494 log.go:172] (0xc000118d10) (0xc0007340a0) Stream added, broadcasting: 3\nI0203 13:15:27.119224 494 log.go:172] (0xc000118d10) Reply frame received for 3\nI0203 13:15:27.119280 494 log.go:172] (0xc000118d10) (0xc000590000) Create stream\nI0203 13:15:27.119294 494 log.go:172] (0xc000118d10) (0xc000590000) Stream added, broadcasting: 5\nI0203 13:15:27.120820 494 log.go:172] (0xc000118d10) Reply frame received for 5\nI0203 13:15:27.251010 494 log.go:172] (0xc000118d10) Data frame received for 5\nI0203 13:15:27.251126 494 log.go:172] (0xc000590000) (5) Data frame handling\nI0203 13:15:27.251183 494 log.go:172] (0xc000590000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 13:15:27.319903 494 log.go:172] (0xc000118d10) Data frame received for 3\nI0203 13:15:27.320020 494 log.go:172] (0xc0007340a0) (3) Data frame handling\nI0203 13:15:27.320058 494 log.go:172] (0xc0007340a0) (3) Data frame sent\nI0203 13:15:27.552980 494 log.go:172] (0xc000118d10) Data frame received for 1\nI0203 13:15:27.553145 494 log.go:172] (0xc000118d10) (0xc0007340a0) Stream removed, broadcasting: 3\nI0203 13:15:27.553297 494 log.go:172] (0xc000646820) (1) Data frame handling\nI0203 
13:15:27.553352 494 log.go:172] (0xc000646820) (1) Data frame sent\nI0203 13:15:27.553383 494 log.go:172] (0xc000118d10) (0xc000646820) Stream removed, broadcasting: 1\nI0203 13:15:27.553909 494 log.go:172] (0xc000118d10) (0xc000590000) Stream removed, broadcasting: 5\nI0203 13:15:27.554585 494 log.go:172] (0xc000118d10) Go away received\nI0203 13:15:27.555303 494 log.go:172] (0xc000118d10) (0xc000646820) Stream removed, broadcasting: 1\nI0203 13:15:27.555398 494 log.go:172] (0xc000118d10) (0xc0007340a0) Stream removed, broadcasting: 3\nI0203 13:15:27.555492 494 log.go:172] (0xc000118d10) (0xc000590000) Stream removed, broadcasting: 5\n" Feb 3 13:15:27.581: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 3 13:15:27.581: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 3 13:15:27.598: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 3 13:15:37.612: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 3 13:15:37.612: INFO: Waiting for statefulset status.replicas updated to 0 Feb 3 13:15:37.641: INFO: POD NODE PHASE GRACE CONDITIONS Feb 3 13:15:37.641: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }] Feb 3 13:15:37.642: INFO: Feb 3 13:15:37.642: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 3 13:15:38.665: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988743215s Feb 3 13:15:39.674: INFO: Verifying statefulset ss doesn't scale 
past 3 for another 7.96512216s Feb 3 13:15:40.681: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.956171822s Feb 3 13:15:41.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.949285893s Feb 3 13:15:43.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.942023276s Feb 3 13:15:44.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.953946901s Feb 3 13:15:46.069: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.938424749s Feb 3 13:15:47.080: INFO: Verifying statefulset ss doesn't scale past 3 for another 561.65192ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3462 Feb 3 13:15:48.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 3 13:15:48.749: INFO: stderr: "I0203 13:15:48.322885 526 log.go:172] (0xc0008c8370) (0xc000976640) Create stream\nI0203 13:15:48.323026 526 log.go:172] (0xc0008c8370) (0xc000976640) Stream added, broadcasting: 1\nI0203 13:15:48.330631 526 log.go:172] (0xc0008c8370) Reply frame received for 1\nI0203 13:15:48.330773 526 log.go:172] (0xc0008c8370) (0xc0008d4000) Create stream\nI0203 13:15:48.330794 526 log.go:172] (0xc0008c8370) (0xc0008d4000) Stream added, broadcasting: 3\nI0203 13:15:48.332872 526 log.go:172] (0xc0008c8370) Reply frame received for 3\nI0203 13:15:48.332919 526 log.go:172] (0xc0008c8370) (0xc0008ea000) Create stream\nI0203 13:15:48.332930 526 log.go:172] (0xc0008c8370) (0xc0008ea000) Stream added, broadcasting: 5\nI0203 13:15:48.336090 526 log.go:172] (0xc0008c8370) Reply frame received for 5\nI0203 13:15:48.494143 526 log.go:172] (0xc0008c8370) Data frame received for 5\nI0203 13:15:48.494569 526 log.go:172] (0xc0008ea000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0203 13:15:48.494673 526 
log.go:172] (0xc0008c8370) Data frame received for 3\nI0203 13:15:48.494746 526 log.go:172] (0xc0008d4000) (3) Data frame handling\nI0203 13:15:48.494764 526 log.go:172] (0xc0008d4000) (3) Data frame sent\nI0203 13:15:48.494822 526 log.go:172] (0xc0008ea000) (5) Data frame sent\nI0203 13:15:48.729641 526 log.go:172] (0xc0008c8370) (0xc0008d4000) Stream removed, broadcasting: 3\nI0203 13:15:48.729917 526 log.go:172] (0xc0008c8370) Data frame received for 1\nI0203 13:15:48.729942 526 log.go:172] (0xc000976640) (1) Data frame handling\nI0203 13:15:48.729976 526 log.go:172] (0xc000976640) (1) Data frame sent\nI0203 13:15:48.729994 526 log.go:172] (0xc0008c8370) (0xc000976640) Stream removed, broadcasting: 1\nI0203 13:15:48.730963 526 log.go:172] (0xc0008c8370) (0xc0008ea000) Stream removed, broadcasting: 5\nI0203 13:15:48.731040 526 log.go:172] (0xc0008c8370) (0xc000976640) Stream removed, broadcasting: 1\nI0203 13:15:48.731052 526 log.go:172] (0xc0008c8370) (0xc0008d4000) Stream removed, broadcasting: 3\nI0203 13:15:48.731067 526 log.go:172] (0xc0008c8370) (0xc0008ea000) Stream removed, broadcasting: 5\n" Feb 3 13:15:48.750: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 3 13:15:48.750: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 3 13:15:48.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 3 13:15:49.328: INFO: stderr: "I0203 13:15:48.945882 546 log.go:172] (0xc0007546e0) (0xc000564c80) Create stream\nI0203 13:15:48.946018 546 log.go:172] (0xc0007546e0) (0xc000564c80) Stream added, broadcasting: 1\nI0203 13:15:48.952215 546 log.go:172] (0xc0007546e0) Reply frame received for 1\nI0203 13:15:48.952334 546 log.go:172] (0xc0007546e0) (0xc00085a0a0) Create stream\nI0203 13:15:48.952356 546 log.go:172] 
(0xc0007546e0) (0xc00085a0a0) Stream added, broadcasting: 3\nI0203 13:15:48.953942 546 log.go:172] (0xc0007546e0) Reply frame received for 3\nI0203 13:15:48.953965 546 log.go:172] (0xc0007546e0) (0xc00085a000) Create stream\nI0203 13:15:48.953972 546 log.go:172] (0xc0007546e0) (0xc00085a000) Stream added, broadcasting: 5\nI0203 13:15:48.955443 546 log.go:172] (0xc0007546e0) Reply frame received for 5\nI0203 13:15:49.174793 546 log.go:172] (0xc0007546e0) Data frame received for 5\nI0203 13:15:49.174823 546 log.go:172] (0xc00085a000) (5) Data frame handling\nI0203 13:15:49.174838 546 log.go:172] (0xc00085a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0203 13:15:49.225423 546 log.go:172] (0xc0007546e0) Data frame received for 3\nI0203 13:15:49.225466 546 log.go:172] (0xc00085a0a0) (3) Data frame handling\nI0203 13:15:49.225473 546 log.go:172] (0xc00085a0a0) (3) Data frame sent\nI0203 13:15:49.225498 546 log.go:172] (0xc0007546e0) Data frame received for 5\nI0203 13:15:49.225505 546 log.go:172] (0xc00085a000) (5) Data frame handling\nI0203 13:15:49.225511 546 log.go:172] (0xc00085a000) (5) Data frame sent\nI0203 13:15:49.225515 546 log.go:172] (0xc0007546e0) Data frame received for 5\nI0203 13:15:49.225520 546 log.go:172] (0xc00085a000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0203 13:15:49.225531 546 log.go:172] (0xc00085a000) (5) Data frame sent\nI0203 13:15:49.320181 546 log.go:172] (0xc0007546e0) Data frame received for 1\nI0203 13:15:49.320257 546 log.go:172] (0xc000564c80) (1) Data frame handling\nI0203 13:15:49.320284 546 log.go:172] (0xc000564c80) (1) Data frame sent\nI0203 13:15:49.320307 546 log.go:172] (0xc0007546e0) (0xc000564c80) Stream removed, broadcasting: 1\nI0203 13:15:49.320686 546 log.go:172] (0xc0007546e0) (0xc00085a0a0) Stream removed, broadcasting: 3\nI0203 13:15:49.321148 546 log.go:172] (0xc0007546e0) (0xc00085a000) Stream removed, broadcasting: 5\nI0203 
13:15:49.321251 546 log.go:172] (0xc0007546e0) Go away received\nI0203 13:15:49.321416 546 log.go:172] (0xc0007546e0) (0xc000564c80) Stream removed, broadcasting: 1\nI0203 13:15:49.321492 546 log.go:172] (0xc0007546e0) (0xc00085a0a0) Stream removed, broadcasting: 3\nI0203 13:15:49.321506 546 log.go:172] (0xc0007546e0) (0xc00085a000) Stream removed, broadcasting: 5\n" Feb 3 13:15:49.328: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 3 13:15:49.328: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 3 13:15:49.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 3 13:15:49.911: INFO: stderr: "I0203 13:15:49.592188 558 log.go:172] (0xc000a5a420) (0xc00083c8c0) Create stream\nI0203 13:15:49.592487 558 log.go:172] (0xc000a5a420) (0xc00083c8c0) Stream added, broadcasting: 1\nI0203 13:15:49.613737 558 log.go:172] (0xc000a5a420) Reply frame received for 1\nI0203 13:15:49.613807 558 log.go:172] (0xc000a5a420) (0xc0005a41e0) Create stream\nI0203 13:15:49.613821 558 log.go:172] (0xc000a5a420) (0xc0005a41e0) Stream added, broadcasting: 3\nI0203 13:15:49.615693 558 log.go:172] (0xc000a5a420) Reply frame received for 3\nI0203 13:15:49.615740 558 log.go:172] (0xc000a5a420) (0xc00083c000) Create stream\nI0203 13:15:49.615762 558 log.go:172] (0xc000a5a420) (0xc00083c000) Stream added, broadcasting: 5\nI0203 13:15:49.617344 558 log.go:172] (0xc000a5a420) Reply frame received for 5\nI0203 13:15:49.713716 558 log.go:172] (0xc000a5a420) Data frame received for 5\nI0203 13:15:49.714156 558 log.go:172] (0xc00083c000) (5) Data frame handling\nI0203 13:15:49.714190 558 log.go:172] (0xc00083c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0203 
13:15:49.714348 558 log.go:172] (0xc000a5a420) Data frame received for 3\nI0203 13:15:49.714442 558 log.go:172] (0xc0005a41e0) (3) Data frame handling\nI0203 13:15:49.714480 558 log.go:172] (0xc0005a41e0) (3) Data frame sent\nI0203 13:15:49.894129 558 log.go:172] (0xc000a5a420) Data frame received for 1\nI0203 13:15:49.894225 558 log.go:172] (0xc00083c8c0) (1) Data frame handling\nI0203 13:15:49.894283 558 log.go:172] (0xc00083c8c0) (1) Data frame sent\nI0203 13:15:49.895135 558 log.go:172] (0xc000a5a420) (0xc0005a41e0) Stream removed, broadcasting: 3\nI0203 13:15:49.895272 558 log.go:172] (0xc000a5a420) (0xc00083c8c0) Stream removed, broadcasting: 1\nI0203 13:15:49.896138 558 log.go:172] (0xc000a5a420) (0xc00083c000) Stream removed, broadcasting: 5\nI0203 13:15:49.897064 558 log.go:172] (0xc000a5a420) (0xc00083c8c0) Stream removed, broadcasting: 1\nI0203 13:15:49.897151 558 log.go:172] (0xc000a5a420) (0xc0005a41e0) Stream removed, broadcasting: 3\nI0203 13:15:49.897225 558 log.go:172] (0xc000a5a420) (0xc00083c000) Stream removed, broadcasting: 5\n" Feb 3 13:15:49.912: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 3 13:15:49.912: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 3 13:15:49.924: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 3 13:15:49.924: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 3 13:15:49.924: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 3 13:15:49.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 3 13:15:50.465: INFO: stderr: "I0203 13:15:50.140047 579 log.go:172] (0xc000a3e630) 
(0xc0005fcb40) Create stream\nI0203 13:15:50.140205 579 log.go:172] (0xc000a3e630) (0xc0005fcb40) Stream added, broadcasting: 1\nI0203 13:15:50.145823 579 log.go:172] (0xc000a3e630) Reply frame received for 1\nI0203 13:15:50.145927 579 log.go:172] (0xc000a3e630) (0xc000a3c000) Create stream\nI0203 13:15:50.145948 579 log.go:172] (0xc000a3e630) (0xc000a3c000) Stream added, broadcasting: 3\nI0203 13:15:50.148968 579 log.go:172] (0xc000a3e630) Reply frame received for 3\nI0203 13:15:50.149079 579 log.go:172] (0xc000a3e630) (0xc0007d6000) Create stream\nI0203 13:15:50.149134 579 log.go:172] (0xc000a3e630) (0xc0007d6000) Stream added, broadcasting: 5\nI0203 13:15:50.151095 579 log.go:172] (0xc000a3e630) Reply frame received for 5\nI0203 13:15:50.254878 579 log.go:172] (0xc000a3e630) Data frame received for 5\nI0203 13:15:50.254974 579 log.go:172] (0xc0007d6000) (5) Data frame handling\nI0203 13:15:50.254995 579 log.go:172] (0xc0007d6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 13:15:50.255079 579 log.go:172] (0xc000a3e630) Data frame received for 3\nI0203 13:15:50.255103 579 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0203 13:15:50.255123 579 log.go:172] (0xc000a3c000) (3) Data frame sent\nI0203 13:15:50.436161 579 log.go:172] (0xc000a3e630) Data frame received for 1\nI0203 13:15:50.436331 579 log.go:172] (0xc000a3e630) (0xc0007d6000) Stream removed, broadcasting: 5\nI0203 13:15:50.436466 579 log.go:172] (0xc0005fcb40) (1) Data frame handling\nI0203 13:15:50.436520 579 log.go:172] (0xc0005fcb40) (1) Data frame sent\nI0203 13:15:50.436838 579 log.go:172] (0xc000a3e630) (0xc000a3c000) Stream removed, broadcasting: 3\nI0203 13:15:50.437051 579 log.go:172] (0xc000a3e630) (0xc0005fcb40) Stream removed, broadcasting: 1\nI0203 13:15:50.437194 579 log.go:172] (0xc000a3e630) Go away received\nI0203 13:15:50.448868 579 log.go:172] (0xc000a3e630) (0xc0005fcb40) Stream removed, broadcasting: 1\nI0203 13:15:50.449001 579 log.go:172] 
(0xc000a3e630) (0xc000a3c000) Stream removed, broadcasting: 3\nI0203 13:15:50.449070 579 log.go:172] (0xc000a3e630) (0xc0007d6000) Stream removed, broadcasting: 5\n" Feb 3 13:15:50.465: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 3 13:15:50.465: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 3 13:15:50.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 3 13:15:50.826: INFO: stderr: "I0203 13:15:50.615952 600 log.go:172] (0xc0007544d0) (0xc000831720) Create stream\nI0203 13:15:50.616205 600 log.go:172] (0xc0007544d0) (0xc000831720) Stream added, broadcasting: 1\nI0203 13:15:50.622089 600 log.go:172] (0xc0007544d0) Reply frame received for 1\nI0203 13:15:50.622138 600 log.go:172] (0xc0007544d0) (0xc0004f4000) Create stream\nI0203 13:15:50.622150 600 log.go:172] (0xc0007544d0) (0xc0004f4000) Stream added, broadcasting: 3\nI0203 13:15:50.623030 600 log.go:172] (0xc0007544d0) Reply frame received for 3\nI0203 13:15:50.623061 600 log.go:172] (0xc0007544d0) (0xc000830f00) Create stream\nI0203 13:15:50.623080 600 log.go:172] (0xc0007544d0) (0xc000830f00) Stream added, broadcasting: 5\nI0203 13:15:50.624219 600 log.go:172] (0xc0007544d0) Reply frame received for 5\nI0203 13:15:50.725375 600 log.go:172] (0xc0007544d0) Data frame received for 5\nI0203 13:15:50.725483 600 log.go:172] (0xc000830f00) (5) Data frame handling\nI0203 13:15:50.725511 600 log.go:172] (0xc000830f00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 13:15:50.748578 600 log.go:172] (0xc0007544d0) Data frame received for 3\nI0203 13:15:50.748770 600 log.go:172] (0xc0004f4000) (3) Data frame handling\nI0203 13:15:50.748784 600 log.go:172] (0xc0004f4000) (3) Data frame sent\nI0203 13:15:50.818961 600 log.go:172] 
(0xc0007544d0) (0xc0004f4000) Stream removed, broadcasting: 3\nI0203 13:15:50.819055 600 log.go:172] (0xc0007544d0) Data frame received for 1\nI0203 13:15:50.819069 600 log.go:172] (0xc000831720) (1) Data frame handling\nI0203 13:15:50.819079 600 log.go:172] (0xc000831720) (1) Data frame sent\nI0203 13:15:50.819086 600 log.go:172] (0xc0007544d0) (0xc000831720) Stream removed, broadcasting: 1\nI0203 13:15:50.819345 600 log.go:172] (0xc0007544d0) (0xc000830f00) Stream removed, broadcasting: 5\nI0203 13:15:50.819371 600 log.go:172] (0xc0007544d0) (0xc000831720) Stream removed, broadcasting: 1\nI0203 13:15:50.819380 600 log.go:172] (0xc0007544d0) (0xc0004f4000) Stream removed, broadcasting: 3\nI0203 13:15:50.819385 600 log.go:172] (0xc0007544d0) (0xc000830f00) Stream removed, broadcasting: 5\n" Feb 3 13:15:50.826: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 3 13:15:50.826: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 3 13:15:50.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 3 13:15:51.437: INFO: stderr: "I0203 13:15:51.007860 613 log.go:172] (0xc00098a0b0) (0xc0008ae140) Create stream\nI0203 13:15:51.008024 613 log.go:172] (0xc00098a0b0) (0xc0008ae140) Stream added, broadcasting: 1\nI0203 13:15:51.015119 613 log.go:172] (0xc00098a0b0) Reply frame received for 1\nI0203 13:15:51.015247 613 log.go:172] (0xc00098a0b0) (0xc000938000) Create stream\nI0203 13:15:51.015283 613 log.go:172] (0xc00098a0b0) (0xc000938000) Stream added, broadcasting: 3\nI0203 13:15:51.020124 613 log.go:172] (0xc00098a0b0) Reply frame received for 3\nI0203 13:15:51.020184 613 log.go:172] (0xc00098a0b0) (0xc0008dc000) Create stream\nI0203 13:15:51.020208 613 log.go:172] (0xc00098a0b0) (0xc0008dc000) Stream added, broadcasting: 5\nI0203 
13:15:51.023365 613 log.go:172] (0xc00098a0b0) Reply frame received for 5\nI0203 13:15:51.137297 613 log.go:172] (0xc00098a0b0) Data frame received for 5\nI0203 13:15:51.137364 613 log.go:172] (0xc0008dc000) (5) Data frame handling\nI0203 13:15:51.137384 613 log.go:172] (0xc0008dc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 13:15:51.252247 613 log.go:172] (0xc00098a0b0) Data frame received for 3\nI0203 13:15:51.252312 613 log.go:172] (0xc000938000) (3) Data frame handling\nI0203 13:15:51.252347 613 log.go:172] (0xc000938000) (3) Data frame sent\nI0203 13:15:51.419835 613 log.go:172] (0xc00098a0b0) (0xc000938000) Stream removed, broadcasting: 3\nI0203 13:15:51.420215 613 log.go:172] (0xc00098a0b0) Data frame received for 1\nI0203 13:15:51.420262 613 log.go:172] (0xc0008ae140) (1) Data frame handling\nI0203 13:15:51.420298 613 log.go:172] (0xc0008ae140) (1) Data frame sent\nI0203 13:15:51.420320 613 log.go:172] (0xc00098a0b0) (0xc0008ae140) Stream removed, broadcasting: 1\nI0203 13:15:51.421048 613 log.go:172] (0xc00098a0b0) (0xc0008dc000) Stream removed, broadcasting: 5\nI0203 13:15:51.421556 613 log.go:172] (0xc00098a0b0) Go away received\nI0203 13:15:51.421679 613 log.go:172] (0xc00098a0b0) (0xc0008ae140) Stream removed, broadcasting: 1\nI0203 13:15:51.421704 613 log.go:172] (0xc00098a0b0) (0xc000938000) Stream removed, broadcasting: 3\nI0203 13:15:51.421709 613 log.go:172] (0xc00098a0b0) (0xc0008dc000) Stream removed, broadcasting: 5\n" Feb 3 13:15:51.438: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 3 13:15:51.438: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 3 13:15:51.438: INFO: Waiting for statefulset status.replicas updated to 0 Feb 3 13:15:51.470: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 3 13:16:01.529: INFO: Waiting for pod ss-0 to enter Running - 
Ready=false, currently Running - Ready=false
Feb 3 13:16:01.529: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 3 13:16:01.529: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 3 13:16:01.567: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 3 13:16:01.567: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }]
Feb 3 13:16:01.567: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:01.567: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:01.567: INFO: 
Feb 3 13:16:01.567: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 3 13:16:03.191: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 3 13:16:03.191: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }]
Feb 3 13:16:03.191: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:03.191: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:03.191: INFO: 
Feb 3 13:16:03.191: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 3 13:16:04.224: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 3 13:16:04.224: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }]
Feb 3 13:16:04.224: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:04.224: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:04.224: INFO: 
Feb 3 13:16:04.224: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 3 13:16:05.476: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 3 13:16:05.476: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }]
Feb 3 13:16:05.476: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:05.476: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:05.476: INFO: 
Feb 3 13:16:05.476: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 3 13:16:06.895: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 3 13:16:06.895: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }]
Feb 3 13:16:06.895: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:06.895: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:06.895: INFO: 
Feb 3 13:16:06.895: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 3 13:16:07.913: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 3 13:16:07.914: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }]
Feb 3 13:16:07.914: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }]
Feb 3 13:16:07.914: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status:
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }] Feb 3 13:16:07.914: INFO: Feb 3 13:16:07.914: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 3 13:16:08.936: INFO: POD NODE PHASE GRACE CONDITIONS Feb 3 13:16:08.936: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }] Feb 3 13:16:08.936: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }] Feb 3 13:16:08.936: INFO: Feb 3 13:16:08.936: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 3 13:16:09.950: INFO: POD NODE PHASE GRACE CONDITIONS Feb 3 13:16:09.950: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }] Feb 3 13:16:09.950: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 
13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }] Feb 3 13:16:09.950: INFO: Feb 3 13:16:09.950: INFO: StatefulSet ss has not reached scale 0, at 2 Feb 3 13:16:10.961: INFO: POD NODE PHASE GRACE CONDITIONS Feb 3 13:16:10.961: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:15 +0000 UTC }] Feb 3 13:16:10.961: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:15:37 +0000 UTC }] Feb 3 13:16:10.961: INFO: Feb 3 13:16:10.961: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3462 Feb 3 13:16:11.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 3 13:16:12.225: INFO: rc: 1 Feb 3 13:16:12.226: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000f86f30 exit status 1 true [0xc00095bf60 0xc002386008 0xc002386028] [0xc00095bf60 0xc002386008 0xc002386028] [0xc00095bfb0 0xc002386020] [0xba6c50 0xba6c50] 0xc001fd0f00 }:
Command stdout:

stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1

Feb 3 13:16:22.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:16:22.375: INFO: rc: 1
Feb 3 13:16:22.376: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00129c000 exit status 1 true [0xc0006fd450 0xc0006fd5a0 0xc0006fd648] [0xc0006fd450 0xc0006fd5a0 0xc0006fd648] [0xc0006fd530 0xc0006fd628] [0xba6c50 0xba6c50] 0xc00272dbc0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:16:32.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:16:32.582: INFO: rc: 1
Feb 3 13:16:32.582: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00129c0c0 exit status 1 true [0xc0006fd6d0 0xc0006fd830 0xc0006fd8f0] [0xc0006fd6d0 0xc0006fd830 0xc0006fd8f0] [0xc0006fd7b0 0xc0006fd8c0] [0xba6c50 0xba6c50] 0xc00272df80 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:16:42.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:16:42.757: INFO: rc: 1
Feb 3 13:16:42.757: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f87050 exit status 1 true [0xc002386030 0xc002386048 0xc002386060] [0xc002386030 0xc002386048 0xc002386060] [0xc002386040 0xc002386058] [0xba6c50 0xba6c50] 0xc001fd1380 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:16:52.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:16:52.931: INFO: rc: 1
Feb 3 13:16:52.932: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000f87140 exit status 1 true [0xc002386068 0xc002386080 0xc002386098] [0xc002386068 0xc002386080 0xc002386098] [0xc002386078 0xc002386090] [0xba6c50 0xba6c50] 0xc001fd1680 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:17:02.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:17:03.136: INFO: rc: 1
Feb 3 13:17:03.136: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d42c30 exit status 1 true [0xc000c479d0 0xc000c47b08 0xc000c47bc8] [0xc000c479d0 0xc000c47b08 0xc000c47bc8] [0xc000c47ac0 0xc000c47b60] [0xba6c50 0xba6c50] 0xc001a6baa0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:17:13.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:17:13.340: INFO: rc: 1
Feb 3 13:17:13.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001d42d20 exit status 1 true [0xc000c47c30 0xc000c47c70 0xc000c47d90] [0xc000c47c30 0xc000c47c70 0xc000c47d90] [0xc000c47c60 0xc000c47d30] [0xba6c50 0xba6c50] 0xc001a6bf80 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:17:23.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:17:23.546: INFO: rc: 1
Feb 3 13:17:23.547: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022fcc90 exit status 1 true [0xc0001861f0 0xc00095a138 0xc00095a240] [0xc0001861f0 0xc00095a138 0xc00095a240] [0xc00095a078 0xc00095a178] [0xba6c50 0xba6c50] 0xc001e1d800 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:17:33.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:17:33.747: INFO: rc: 1
Feb 3 13:17:33.747: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00254e090 exit status 1 true [0xc002386008 0xc002386028 0xc002386040] [0xc002386008 0xc002386028 0xc002386040] [0xc002386020 0xc002386038] [0xba6c50 0xba6c50] 0xc001a6a4e0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:17:43.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:17:43.886: INFO: rc: 1
Feb 3 13:17:43.887: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00254e150 exit status 1 true [0xc002386048 0xc002386060 0xc002386078] [0xc002386048 0xc002386060 0xc002386078] [0xc002386058 0xc002386070] [0xba6c50 0xba6c50] 0xc001a6a840 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:17:53.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:17:54.015: INFO: rc: 1
Feb 3 13:17:54.016: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00255c0c0 exit status 1 true [0xc000c46060 0xc000c461b0 0xc000c46370] [0xc000c46060 0xc000c461b0 0xc000c46370] [0xc000c46188 0xc000c46270] [0xba6c50 0xba6c50] 0xc00272c480 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:18:04.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:18:04.119: INFO: rc: 1
Feb 3 13:18:04.120: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00254e240 exit status 1 true [0xc002386080 0xc002386098 0xc0023860b0] [0xc002386080 0xc002386098 0xc0023860b0] [0xc002386090 0xc0023860a8] [0xba6c50 0xba6c50] 0xc001a6b1a0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:18:14.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:18:14.254: INFO: rc: 1
Feb 3 13:18:14.254: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e7c0f0 exit status 1 true [0xc0006fcb28 0xc0006fcc10 0xc0006fcd68] [0xc0006fcb28 0xc0006fcc10 0xc0006fcd68] [0xc0006fcbf0 0xc0006fcc50] [0xba6c50 0xba6c50] 0xc0022288a0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:18:24.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:18:24.377: INFO: rc: 1
Feb 3 13:18:24.377: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00254e330 exit status 1 true [0xc0023860b8 0xc0023860d8 0xc0023860f0] [0xc0023860b8 0xc0023860d8 0xc0023860f0] [0xc0023860d0 0xc0023860e8] [0xba6c50 0xba6c50] 0xc001a6bbc0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:18:34.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:18:34.611: INFO: rc: 1
Feb 3 13:18:34.611: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00255c180 exit status 1 true [0xc000c464a0 0xc000c46668 0xc000c46778] [0xc000c464a0 0xc000c46668 0xc000c46778] [0xc000c465c0 0xc000c466f8] [0xba6c50 0xba6c50] 0xc00272c780 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:18:44.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:18:44.766: INFO: rc: 1
Feb 3 13:18:44.766: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e7c1b0 exit status 1 true [0xc0006fcdc8 0xc0006fcf10 0xc0006fd030] [0xc0006fcdc8 0xc0006fcf10 0xc0006fd030] [0xc0006fcea8 0xc0006fcfa8] [0xba6c50 0xba6c50] 0xc0022293e0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:18:54.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:18:54.931: INFO: rc: 1
Feb 3 13:18:54.932: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e7c2a0 exit status 1 true [0xc0006fd0b0 0xc0006fd138 0xc0006fd190] [0xc0006fd0b0 0xc0006fd138 0xc0006fd190] [0xc0006fd118 0xc0006fd160] [0xba6c50 0xba6c50] 0xc0026fa060 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:19:04.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:19:05.086: INFO: rc: 1
Feb 3 13:19:05.086: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022fcdb0 exit status 1 true [0xc00095a2e8 0xc00095a5d0 0xc00095ac40] [0xc00095a2e8 0xc00095a5d0 0xc00095ac40] [0xc00095a518 0xc00095ab78] [0xba6c50 0xba6c50] 0xc0023e0a20 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:19:15.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:19:15.301: INFO: rc: 1
Feb 3 13:19:15.301: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00255c2a0 exit status 1 true [0xc000c46890 0xc000c46ae0 0xc000c46c28] [0xc000c46890 0xc000c46ae0 0xc000c46c28] [0xc000c469b8 0xc000c46bb8] [0xba6c50 0xba6c50] 0xc00272cc00 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:19:25.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:19:25.523: INFO: rc: 1
Feb 3 13:19:25.523: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e7c060 exit status 1 true [0xc000186000 0xc0006fcbb8 0xc0006fcc20] [0xc000186000 0xc0006fcbb8 0xc0006fcc20] [0xc0006fcb28 0xc0006fcc10] [0xba6c50 0xba6c50] 0xc0022288a0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:19:35.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:19:35.680: INFO: rc: 1
Feb 3 13:19:35.681: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022fcd20 exit status 1 true [0xc00095a078 0xc00095a178 0xc00095a498] [0xc00095a078 0xc00095a178 0xc00095a498] [0xc00095a148 0xc00095a2e8] [0xba6c50 0xba6c50] 0xc001e1d800 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:19:45.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:19:45.884: INFO: rc: 1
Feb 3 13:19:45.884: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e7c150 exit status 1 true [0xc0006fcc50 0xc0006fce80 0xc0006fcf70] [0xc0006fcc50 0xc0006fce80 0xc0006fcf70] [0xc0006fcdc8 0xc0006fcf10] [0xba6c50 0xba6c50] 0xc0022293e0 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:19:55.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:19:56.048: INFO: rc: 1
Feb 3 13:19:56.049: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022fce10 exit status 1 true [0xc00095a518 0xc00095ab78 0xc00095ae08] [0xc00095a518 0xc00095ab78 0xc00095ae08] [0xc00095a5f8 0xc00095ad50] [0xba6c50 0xba6c50] 0xc0026fa540 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:20:06.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:20:06.190: INFO: rc: 1
Feb 3 13:20:06.191: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00254e0f0 exit status 1 true [0xc002386008 0xc002386028 0xc002386040] [0xc002386008 0xc002386028 0xc002386040] [0xc002386020 0xc002386038] [0xba6c50 0xba6c50] 0xc0023e0a20 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:20:16.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:20:16.403: INFO: rc: 1
Feb 3 13:20:16.404: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00254e210 exit status 1 true [0xc002386048 0xc002386060 0xc002386078] [0xc002386048 0xc002386060 0xc002386078] [0xc002386058 0xc002386070] [0xba6c50 0xba6c50] 0xc0023e1e60 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:20:26.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:20:26.607: INFO: rc: 1
Feb 3 13:20:26.608: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00254e360 exit status 1 true [0xc002386080 0xc002386098 0xc0023860b0] [0xc002386080 0xc002386098 0xc0023860b0] [0xc002386090 0xc0023860a8] [0xba6c50 0xba6c50] 0xc001a6a540 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:20:36.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:20:36.741: INFO: rc: 1
Feb 3 13:20:36.741: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e7c270 exit status 1 true [0xc0006fcfa8 0xc0006fd0e0 0xc0006fd148] [0xc0006fcfa8 0xc0006fd0e0 0xc0006fd148] [0xc0006fd0b0 0xc0006fd138] [0xba6c50 0xba6c50] 0xc00272c000 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:20:46.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:20:46.943: INFO: rc: 1
Feb 3 13:20:46.943: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e7c360 exit status 1 true [0xc0006fd160 0xc0006fd250 0xc0006fd368] [0xc0006fd160 0xc0006fd250 0xc0006fd368] [0xc0006fd230 0xc0006fd328] [0xba6c50 0xba6c50] 0xc00272c540 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:20:56.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:20:57.106: INFO: rc: 1
Feb 3 13:20:57.106: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022fcf30 exit status 1 true [0xc00095ae38 0xc00095aeb0 0xc00095afb8] [0xc00095ae38 0xc00095aeb0 0xc00095afb8] [0xc00095ae90 0xc00095af88] [0xba6c50 0xba6c50] 0xc0026faa80 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:21:07.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:21:07.243: INFO: rc: 1
Feb 3 13:21:07.243: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0022fd020 exit status 1 true [0xc00095afd8 0xc00095b078 0xc00095b128] [0xc00095afd8 0xc00095b078 0xc00095b128] [0xc00095b050 0xc00095b0b0] [0xba6c50 0xba6c50] 0xc0026fb020 }:
Command stdout:

stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1

Feb 3 13:21:17.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3462 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 3 13:21:17.416: INFO: rc: 1
Feb 3 13:21:17.416: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Feb 3 13:21:17.416: INFO: Scaling statefulset ss to 0
Feb 3 13:21:17.434: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 3 13:21:17.438: INFO: Deleting all statefulset in ns statefulset-3462
Feb 3 13:21:17.442: INFO: Scaling statefulset ss to 0
Feb 3 13:21:17.454: INFO: Waiting for statefulset status.replicas updated to 0
Feb 3 13:21:17.458: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:21:17.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3462" for this suite.
Feb 3 13:21:23.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:21:23.726: INFO: namespace statefulset-3462 deletion completed in 6.178357567s

• [SLOW TEST:368.715 seconds]
[sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:21:23.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-471ca4b0-8ca7-4d8c-9370-d447c5f956f2
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-471ca4b0-8ca7-4d8c-9370-d447c5f956f2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:21:36.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6797" for this suite.
Feb 3 13:21:58.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:21:58.200: INFO: namespace projected-6797 deletion completed in 22.167309461s

• [SLOW TEST:34.474 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:21:58.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-768beda9-1689-45af-a3d5-1c420ad50300
STEP: Creating a pod to test consume secrets
Feb 3 13:21:58.348: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024" in namespace "projected-6909" to be "success or failure"
Feb 3 13:21:58.383: INFO: Pod "pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024": Phase="Pending", Reason="", readiness=false. Elapsed: 35.021088ms
Feb 3 13:22:00.392: INFO: Pod "pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043778948s
Feb 3 13:22:02.399: INFO: Pod "pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051211983s
Feb 3 13:22:04.406: INFO: Pod "pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057990519s
Feb 3 13:22:06.419: INFO: Pod "pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070704995s
STEP: Saw pod success
Feb 3 13:22:06.419: INFO: Pod "pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024" satisfied condition "success or failure"
Feb 3 13:22:06.426: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024 container projected-secret-volume-test:
STEP: delete the pod
Feb 3 13:22:06.505: INFO: Waiting for pod pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024 to disappear
Feb 3 13:22:06.514: INFO: Pod pod-projected-secrets-d5980fb2-3688-497e-be3a-e39f48dc0024 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:22:06.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6909" for this suite.
Feb 3 13:22:12.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:22:12.707: INFO: namespace projected-6909 deletion completed in 6.188308298s

• [SLOW TEST:14.507 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:22:12.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 3 13:22:12.794: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 3 13:22:12.805: INFO: Waiting for terminating namespaces to be deleted...
Feb 3 13:22:12.808: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 3 13:22:12.821: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 3 13:22:12.821: INFO: Container weave ready: true, restart count 0
Feb 3 13:22:12.821: INFO: Container weave-npc ready: true, restart count 0
Feb 3 13:22:12.821: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 3 13:22:12.821: INFO: Container kube-proxy ready: true, restart count 0
Feb 3 13:22:12.821: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 3 13:22:12.839: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 3 13:22:12.839: INFO: Container kube-scheduler ready: true, restart count 13
Feb 3 13:22:12.839: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 3 13:22:12.839: INFO: Container coredns ready: true, restart count 0
Feb 3 13:22:12.839: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 3 13:22:12.839: INFO: Container etcd ready: true, restart count 0
Feb 3 13:22:12.839: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 3 13:22:12.839: INFO: Container weave ready: true, restart count 0
Feb 3 13:22:12.839: INFO: Container weave-npc ready: true, restart count 0
Feb 3 13:22:12.839: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 3 13:22:12.839: INFO: Container coredns ready: true, restart count 0
Feb 3 13:22:12.839: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 3 13:22:12.839: INFO: Container kube-controller-manager ready: true, restart count 19
Feb 3 13:22:12.839: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 3 13:22:12.839: INFO: Container kube-proxy ready: true, restart count 0
Feb 3 13:22:12.839: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 3 13:22:12.839: INFO: Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15efe72343b556f5], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:22:13.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-167" for this suite.
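The FailedScheduling event above ("0/2 nodes are available: 2 node(s) didn't match node selector") reflects the scheduler's nodeSelector predicate: a pod fits a node only if every key/value pair in its nodeSelector exactly matches that node's labels. A self-contained sketch of that check (the node names mirror this cluster; the selector is the arbitrary non-matching one the test uses):

```go
package main

import "fmt"

// selectorMatches reports whether nodeLabels satisfies every
// key/value pair in the pod's nodeSelector. nodeSelector uses
// exact-match semantics: any missing or differing label disqualifies
// the node.
func selectorMatches(nodeLabels, nodeSelector map[string]string) bool {
	for k, v := range nodeSelector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// Labels for the two nodes in this cluster (hostname label only,
	// for illustration).
	nodes := []map[string]string{
		{"kubernetes.io/hostname": "iruya-node"},
		{"kubernetes.io/hostname": "iruya-server-sfge57q7djm7"},
	}
	// A nonempty selector no node carries, as in the test above.
	sel := map[string]string{"label": "nonempty"}

	available := 0
	for _, labels := range nodes {
		if selectorMatches(labels, sel) {
			available++
		}
	}
	fmt.Printf("%d/%d nodes are available\n", available, len(nodes)) // 0/2 nodes are available
}
```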
Feb 3 13:22:19.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:22:20.083: INFO: namespace sched-pred-167 deletion completed in 6.165027673s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.375 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:22:20.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 3 13:22:20.185: INFO: Waiting up to 5m0s for pod "client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c" in namespace "containers-4298" to be "success or failure"
Feb 3 13:22:20.192: INFO: Pod "client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.658574ms
Feb 3 13:22:22.200: INFO: Pod "client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014906757s
Feb 3 13:22:24.218: INFO: Pod "client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033259913s
Feb 3 13:22:26.226: INFO: Pod "client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040559745s
Feb 3 13:22:28.233: INFO: Pod "client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047680606s
Feb 3 13:22:30.242: INFO: Pod "client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05657204s
STEP: Saw pod success
Feb 3 13:22:30.242: INFO: Pod "client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c" satisfied condition "success or failure"
Feb 3 13:22:30.246: INFO: Trying to get logs from node iruya-node pod client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c container test-container:
STEP: delete the pod
Feb 3 13:22:30.386: INFO: Waiting for pod client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c to disappear
Feb 3 13:22:30.399: INFO: Pod client-containers-6b17b6c3-fde9-432d-9291-54085f6f765c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:22:30.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4298" for this suite.
Feb 3 13:22:36.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:22:36.642: INFO: namespace containers-4298 deletion completed in 6.234838896s

• [SLOW TEST:16.558 seconds]
[k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:22:36.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 3 13:22:43.848: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:22:43.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1810" for this suite.
Feb 3 13:22:50.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:22:50.140: INFO: namespace container-runtime-1810 deletion completed in 6.177294887s

• [SLOW TEST:13.497 seconds]
[k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:22:50.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8743
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 3 13:22:50.223: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 3 13:23:31.040: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8743 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 13:23:31.040: INFO: >>> kubeConfig: /root/.kube/config
I0203 13:23:31.134343 8 log.go:172] (0xc00094a370) (0xc001474b40) Create stream
I0203 13:23:31.134489 8 log.go:172] (0xc00094a370) (0xc001474b40) Stream added, broadcasting: 1
I0203 13:23:31.143552 8 log.go:172] (0xc00094a370) Reply frame received for 1
I0203 13:23:31.143603 8 log.go:172] (0xc00094a370) (0xc000db8000) Create stream
I0203 13:23:31.143616 8 log.go:172] (0xc00094a370) (0xc000db8000) Stream added, broadcasting: 3
I0203 13:23:31.146466 8 log.go:172] (0xc00094a370) Reply frame received for 3
I0203 13:23:31.146591 8 log.go:172] (0xc00094a370) (0xc0012825a0) Create stream
I0203 13:23:31.146607 8 log.go:172] (0xc00094a370) (0xc0012825a0) Stream added, broadcasting: 5
I0203 13:23:31.148844 8 log.go:172] (0xc00094a370) Reply frame received for 5
I0203 13:23:31.267827 8 log.go:172] (0xc00094a370) Data frame received for 3
I0203 13:23:31.267882 8 log.go:172] (0xc000db8000) (3) Data frame handling
I0203 13:23:31.267906 8 log.go:172] (0xc000db8000) (3) Data frame sent
I0203 13:23:31.399700 8 log.go:172] (0xc00094a370) Data frame received for 1
I0203 13:23:31.400004 8 log.go:172] (0xc00094a370) (0xc000db8000) Stream removed, broadcasting: 3
I0203 13:23:31.400227 8 log.go:172] (0xc001474b40) (1) Data frame handling
I0203 13:23:31.400284 8 log.go:172] (0xc001474b40) (1) Data frame sent
I0203 13:23:31.400324 8 log.go:172] (0xc00094a370) (0xc0012825a0) Stream removed, broadcasting: 5
I0203 13:23:31.400383 8 log.go:172] (0xc00094a370) (0xc001474b40) Stream removed, broadcasting: 1
I0203 13:23:31.400424 8 log.go:172] (0xc00094a370) Go away received
I0203 13:23:31.402196 8 log.go:172] (0xc00094a370) (0xc001474b40) Stream removed, broadcasting: 1
I0203 13:23:31.402382 8 log.go:172] (0xc00094a370) (0xc000db8000) Stream removed, broadcasting: 3
I0203 13:23:31.402397 8 log.go:172] (0xc00094a370) (0xc0012825a0) Stream removed, broadcasting: 5
Feb 3 13:23:31.402: INFO: Found all expected endpoints: [netserver-0]
Feb 3 13:23:31.411: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8743 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 3 13:23:31.411: INFO: >>> kubeConfig: /root/.kube/config
I0203 13:23:31.485063 8 log.go:172] (0xc00094b1e0) (0xc001475540) Create stream
I0203 13:23:31.485114 8 log.go:172] (0xc00094b1e0) (0xc001475540) Stream added, broadcasting: 1
I0203 13:23:31.491397 8 log.go:172] (0xc00094b1e0) Reply frame received for 1
I0203 13:23:31.491434 8 log.go:172] (0xc00094b1e0) (0xc000db8140) Create stream
I0203 13:23:31.491458 8 log.go:172] (0xc00094b1e0) (0xc000db8140) Stream added, broadcasting: 3
I0203 13:23:31.495687 8 log.go:172] (0xc00094b1e0) Reply frame received for 3
I0203 13:23:31.495733 8 log.go:172] (0xc00094b1e0) (0xc00308e000) Create stream
I0203 13:23:31.495761 8 log.go:172] (0xc00094b1e0) (0xc00308e000) Stream added, broadcasting: 5
I0203 13:23:31.498019 8 log.go:172] (0xc00094b1e0) Reply frame received for 5
I0203 13:23:31.618752 8 log.go:172] (0xc00094b1e0) Data frame received for 3
I0203 13:23:31.618803 8 log.go:172] (0xc000db8140) (3) Data frame handling
I0203 13:23:31.618834 8 log.go:172] (0xc000db8140) (3) Data frame sent
I0203 13:23:31.786995 8 log.go:172] (0xc00094b1e0) Data frame received for 1
I0203 13:23:31.787268 8 log.go:172] (0xc00094b1e0) (0xc000db8140) Stream removed, broadcasting: 3
I0203 13:23:31.787390 8 log.go:172] (0xc001475540) (1) Data frame handling
I0203 13:23:31.787484 8 log.go:172] (0xc001475540) (1) Data frame sent
I0203 13:23:31.787763 8 log.go:172] (0xc00094b1e0) (0xc00308e000) Stream removed, broadcasting: 5
I0203 13:23:31.787816 8 log.go:172] (0xc00094b1e0) (0xc001475540) Stream removed, broadcasting: 1
I0203 13:23:31.787856 8 log.go:172] (0xc00094b1e0) Go away received
I0203 13:23:31.788103 8 log.go:172] (0xc00094b1e0) (0xc001475540) Stream removed, broadcasting: 1
I0203 13:23:31.788136 8 log.go:172] (0xc00094b1e0) (0xc000db8140) Stream removed, broadcasting: 3
I0203 13:23:31.788156 8 log.go:172] (0xc00094b1e0) (0xc00308e000) Stream removed, broadcasting: 5
Feb 3 13:23:31.788: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:23:31.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8743" for this suite.
Feb 3 13:23:53.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:23:53.965: INFO: namespace pod-network-test-8743 deletion completed in 22.162736101s

• [SLOW TEST:63.824 seconds]
[sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:23:53.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 3 13:24:10.148: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:10.155: INFO: Pod pod-with-prestop-http-hook still exists
Feb 3 13:24:12.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:12.194: INFO: Pod pod-with-prestop-http-hook still exists
Feb 3 13:24:14.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:14.165: INFO: Pod pod-with-prestop-http-hook still exists
Feb 3 13:24:16.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:16.167: INFO: Pod pod-with-prestop-http-hook still exists
Feb 3 13:24:18.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:18.166: INFO: Pod pod-with-prestop-http-hook still exists
Feb 3 13:24:20.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:20.168: INFO: Pod pod-with-prestop-http-hook still exists
Feb 3 13:24:22.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:22.177: INFO: Pod pod-with-prestop-http-hook still exists
Feb 3 13:24:24.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:24.162: INFO: Pod pod-with-prestop-http-hook still exists
Feb 3 13:24:26.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:26.163: INFO: Pod pod-with-prestop-http-hook still exists
Feb 3 13:24:28.156: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 3 13:24:28.195: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:24:28.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6294" for this suite.
Feb 3 13:24:52.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:24:52.428: INFO: namespace container-lifecycle-hook-6294 deletion completed in 24.199130909s

• [SLOW TEST:58.462 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:24:52.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 3 13:25:01.163: INFO: Successfully updated pod "annotationupdate9b88acb3-39a0-49d2-a862-b91c79076985"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:25:03.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8458" for this suite.
Feb 3 13:25:25.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:25:25.473: INFO: namespace projected-8458 deletion completed in 22.216675728s

• [SLOW TEST:33.045 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:25:25.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 3 13:25:25.596: INFO: Waiting up to 5m0s for pod "downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f" in namespace "downward-api-5506" to be "success or failure"
Feb 3 13:25:25.602: INFO: Pod "downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.300912ms
Feb 3 13:25:27.610: INFO: Pod "downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013879766s
Feb 3 13:25:29.618: INFO: Pod "downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021633514s
Feb 3 13:25:31.626: INFO: Pod "downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029126083s
Feb 3 13:25:33.641: INFO: Pod "downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044648503s
STEP: Saw pod success
Feb 3 13:25:33.641: INFO: Pod "downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f" satisfied condition "success or failure"
Feb 3 13:25:33.646: INFO: Trying to get logs from node iruya-node pod downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f container dapi-container:
STEP: delete the pod
Feb 3 13:25:33.827: INFO: Waiting for pod downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f to disappear
Feb 3 13:25:33.845: INFO: Pod downward-api-b1e5c6a4-c763-4135-a6e2-f346fcb62d4f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:25:33.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5506" for this suite.
Feb 3 13:25:39.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:25:40.107: INFO: namespace downward-api-5506 deletion completed in 6.252161328s

• [SLOW TEST:14.634 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 3 13:25:40.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-gcph
STEP: Creating a pod to test atomic-volume-subpath
Feb 3 13:25:40.230: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gcph" in namespace "subpath-1533" to be "success or failure"
Feb 3 13:25:40.236: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Pending", Reason="", readiness=false. Elapsed: 5.881141ms
Feb 3 13:25:42.245: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014885019s
Feb 3 13:25:44.257: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026570725s
Feb 3 13:25:46.267: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036531176s
Feb 3 13:25:48.281: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 8.050500498s
Feb 3 13:25:50.290: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 10.05978421s
Feb 3 13:25:52.298: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 12.067630523s
Feb 3 13:25:54.309: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 14.078525797s
Feb 3 13:25:56.320: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 16.089860561s
Feb 3 13:25:58.332: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 18.10147039s
Feb 3 13:26:00.340: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 20.11006863s
Feb 3 13:26:02.355: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 22.12431704s
Feb 3 13:26:04.370: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 24.139849405s
Feb 3 13:26:06.382: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Running", Reason="", readiness=true. Elapsed: 26.151280736s
Feb 3 13:26:08.397: INFO: Pod "pod-subpath-test-configmap-gcph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.166105141s
STEP: Saw pod success
Feb 3 13:26:08.397: INFO: Pod "pod-subpath-test-configmap-gcph" satisfied condition "success or failure"
Feb 3 13:26:08.401: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-gcph container test-container-subpath-configmap-gcph:
STEP: delete the pod
Feb 3 13:26:08.477: INFO: Waiting for pod pod-subpath-test-configmap-gcph to disappear
Feb 3 13:26:08.515: INFO: Pod pod-subpath-test-configmap-gcph no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gcph
Feb 3 13:26:08.515: INFO: Deleting pod "pod-subpath-test-configmap-gcph" in namespace "subpath-1533"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 3 13:26:08.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1533" for this suite.
Feb 3 13:26:14.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 3 13:26:14.711: INFO: namespace subpath-1533 deletion completed in 6.183402708s

• [SLOW TEST:34.604 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:26:14.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:26:21.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8236" for this suite. Feb 3 13:26:27.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:26:27.368: INFO: namespace namespaces-8236 deletion completed in 6.192193726s STEP: Destroying namespace "nsdeletetest-8948" for this suite. Feb 3 13:26:27.374: INFO: Namespace nsdeletetest-8948 was already deleted STEP: Destroying namespace "nsdeletetest-793" for this suite. 
Feb 3 13:26:33.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:26:33.661: INFO: namespace nsdeletetest-793 deletion completed in 6.286649174s • [SLOW TEST:18.949 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:26:33.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 3 13:26:33.771: INFO: Waiting up to 5m0s for pod "downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f" in namespace "downward-api-3164" to be "success or failure" Feb 3 13:26:33.788: INFO: Pod "downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.61966ms Feb 3 13:26:35.797: INFO: Pod "downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026161531s Feb 3 13:26:37.809: INFO: Pod "downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.03773432s Feb 3 13:26:39.822: INFO: Pod "downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050314111s Feb 3 13:26:41.851: INFO: Pod "downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080296692s STEP: Saw pod success Feb 3 13:26:41.852: INFO: Pod "downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f" satisfied condition "success or failure" Feb 3 13:26:41.857: INFO: Trying to get logs from node iruya-node pod downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f container dapi-container: STEP: delete the pod Feb 3 13:26:41.927: INFO: Waiting for pod downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f to disappear Feb 3 13:26:41.931: INFO: Pod downward-api-cc10bff5-9eee-4898-ab85-23066f0a3b7f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:26:41.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3164" for this suite. 
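The framework polls the pod roughly every two seconds and logs each observation as a `Pod "<name>": Phase="…" … Elapsed: …` line, as seen in the Downward API spec above. A small post-processing sketch (not part of the e2e framework; the helper name and regex are my own) can recover how long each pod took to reach a target phase from exactly these lines:

```python
import re

# Matches the poll lines this log emits, e.g.
#   Feb 3 13:26:41.851: INFO: Pod "downward-api-...": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080296692s
POLL_RE = re.compile(
    r'Pod "(?P<pod>[^"]+)": Phase="(?P<phase>\w+)".*?Elapsed: (?P<elapsed>[\d.]+)(?P<unit>ms|s)'
)

def time_to_phase(lines, target="Succeeded"):
    """Return {pod_name: seconds until the pod was first seen in `target` phase}."""
    result = {}
    for line in lines:
        m = POLL_RE.search(line)
        if not m:
            continue
        secs = float(m.group("elapsed"))
        if m.group("unit") == "ms":
            secs /= 1000.0  # early polls report elapsed time in milliseconds
        if m.group("phase") == target and m.group("pod") not in result:
            result[m.group("pod")] = secs
    return result
```

Feeding it the two poll lines for `downward-api-cc10bff5-…` above yields a time-to-Succeeded of about 8.08 seconds, matching the log.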
Feb 3 13:26:48.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:26:48.124: INFO: namespace downward-api-3164 deletion completed in 6.180717575s • [SLOW TEST:14.462 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:26:48.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 3 13:26:56.376: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:26:56.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9685" for this suite. Feb 3 13:27:02.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:27:02.646: INFO: namespace container-runtime-9685 deletion completed in 6.169602318s • [SLOW TEST:14.522 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:27:02.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
projected-configmap-test-volume-ef11f298-e3f4-40a6-af13-cd2f51796086 STEP: Creating a pod to test consume configMaps Feb 3 13:27:02.781: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7" in namespace "projected-223" to be "success or failure" Feb 3 13:27:02.795: INFO: Pod "pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.318623ms Feb 3 13:27:04.837: INFO: Pod "pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055950474s Feb 3 13:27:06.845: INFO: Pod "pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064698547s Feb 3 13:27:08.855: INFO: Pod "pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074082427s Feb 3 13:27:10.868: INFO: Pod "pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087835009s STEP: Saw pod success Feb 3 13:27:10.869: INFO: Pod "pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7" satisfied condition "success or failure" Feb 3 13:27:10.883: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7 container projected-configmap-volume-test: STEP: delete the pod Feb 3 13:27:11.155: INFO: Waiting for pod pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7 to disappear Feb 3 13:27:11.212: INFO: Pod pod-projected-configmaps-e96aa1ac-8ade-4c26-9932-7d1d369ec0b7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:27:11.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-223" for this suite. 
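Each slow spec in this run ends with a `• [SLOW TEST:N seconds]` marker (14.634s, 34.604s, 18.949s, and so on above). A hedged sketch of how one might total those markers when triaging a run; the function is hypothetical post-processing, not Ginkgo functionality:

```python
import re

# Matches Ginkgo's per-spec duration markers, e.g. "[SLOW TEST:14.634 seconds]".
SLOW_RE = re.compile(r"\[SLOW TEST:(?P<secs>[\d.]+) seconds\]")

def total_slow_test_seconds(text):
    """Sum every SLOW TEST duration found in an e2e log, in seconds."""
    return sum(float(m.group("secs")) for m in SLOW_RE.finditer(text))
```

Applied to the whole run, this gives a quick upper-bound estimate of where wall-clock time went, since each marker covers a spec's setup, poll loop, and namespace teardown.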
Feb 3 13:27:17.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:27:17.393: INFO: namespace projected-223 deletion completed in 6.169590077s • [SLOW TEST:14.746 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:27:17.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-649f STEP: Creating a pod to test atomic-volume-subpath Feb 3 13:27:17.498: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-649f" in namespace "subpath-2528" to be "success or failure" Feb 3 13:27:17.550: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Pending", Reason="", readiness=false. Elapsed: 52.406402ms Feb 3 13:27:20.163: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.665213487s Feb 3 13:27:22.181: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.683224577s Feb 3 13:27:24.194: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.696215374s Feb 3 13:27:26.238: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 8.740468657s Feb 3 13:27:28.248: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 10.749953811s Feb 3 13:27:30.261: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 12.763658097s Feb 3 13:27:32.268: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 14.769773662s Feb 3 13:27:34.280: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 16.78245446s Feb 3 13:27:36.290: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 18.792174839s Feb 3 13:27:38.300: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 20.801930501s Feb 3 13:27:40.313: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 22.814827863s Feb 3 13:27:42.322: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 24.824590329s Feb 3 13:27:44.336: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 26.838672397s Feb 3 13:27:46.346: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Running", Reason="", readiness=true. Elapsed: 28.848466354s Feb 3 13:27:48.353: INFO: Pod "pod-subpath-test-downwardapi-649f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.855673391s STEP: Saw pod success Feb 3 13:27:48.354: INFO: Pod "pod-subpath-test-downwardapi-649f" satisfied condition "success or failure" Feb 3 13:27:48.357: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-649f container test-container-subpath-downwardapi-649f: STEP: delete the pod Feb 3 13:27:48.830: INFO: Waiting for pod pod-subpath-test-downwardapi-649f to disappear Feb 3 13:27:48.842: INFO: Pod pod-subpath-test-downwardapi-649f no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-649f Feb 3 13:27:48.842: INFO: Deleting pod "pod-subpath-test-downwardapi-649f" in namespace "subpath-2528" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:27:48.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2528" for this suite. Feb 3 13:27:54.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:27:55.049: INFO: namespace subpath-2528 deletion completed in 6.136194417s • [SLOW TEST:37.656 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Feb 3 13:27:55.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 3 13:27:55.229: INFO: Waiting up to 5m0s for pod "pod-b41ff596-842e-4c4e-8b02-23953b6962f2" in namespace "emptydir-7759" to be "success or failure" Feb 3 13:27:55.274: INFO: Pod "pod-b41ff596-842e-4c4e-8b02-23953b6962f2": Phase="Pending", Reason="", readiness=false. Elapsed: 44.34741ms Feb 3 13:27:57.278: INFO: Pod "pod-b41ff596-842e-4c4e-8b02-23953b6962f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0488224s Feb 3 13:27:59.285: INFO: Pod "pod-b41ff596-842e-4c4e-8b02-23953b6962f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05558996s Feb 3 13:28:01.292: INFO: Pod "pod-b41ff596-842e-4c4e-8b02-23953b6962f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061976339s Feb 3 13:28:03.297: INFO: Pod "pod-b41ff596-842e-4c4e-8b02-23953b6962f2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.067675991s STEP: Saw pod success Feb 3 13:28:03.297: INFO: Pod "pod-b41ff596-842e-4c4e-8b02-23953b6962f2" satisfied condition "success or failure" Feb 3 13:28:03.401: INFO: Trying to get logs from node iruya-node pod pod-b41ff596-842e-4c4e-8b02-23953b6962f2 container test-container: STEP: delete the pod Feb 3 13:28:03.508: INFO: Waiting for pod pod-b41ff596-842e-4c4e-8b02-23953b6962f2 to disappear Feb 3 13:28:03.534: INFO: Pod pod-b41ff596-842e-4c4e-8b02-23953b6962f2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:28:03.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7759" for this suite. Feb 3 13:28:09.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:28:09.799: INFO: namespace emptydir-7759 deletion completed in 6.256323616s • [SLOW TEST:14.749 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:28:09.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 3 13:28:09.927: INFO: Waiting up to 5m0s for pod "pod-2a1b6522-d453-4440-8f3d-d30835fec0d7" in namespace "emptydir-3400" to be "success or failure" Feb 3 13:28:09.946: INFO: Pod "pod-2a1b6522-d453-4440-8f3d-d30835fec0d7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.973804ms Feb 3 13:28:11.954: INFO: Pod "pod-2a1b6522-d453-4440-8f3d-d30835fec0d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026985534s Feb 3 13:28:13.968: INFO: Pod "pod-2a1b6522-d453-4440-8f3d-d30835fec0d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041313608s Feb 3 13:28:15.990: INFO: Pod "pod-2a1b6522-d453-4440-8f3d-d30835fec0d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062832604s Feb 3 13:28:17.999: INFO: Pod "pod-2a1b6522-d453-4440-8f3d-d30835fec0d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072078746s STEP: Saw pod success Feb 3 13:28:17.999: INFO: Pod "pod-2a1b6522-d453-4440-8f3d-d30835fec0d7" satisfied condition "success or failure" Feb 3 13:28:18.004: INFO: Trying to get logs from node iruya-node pod pod-2a1b6522-d453-4440-8f3d-d30835fec0d7 container test-container: STEP: delete the pod Feb 3 13:28:18.068: INFO: Waiting for pod pod-2a1b6522-d453-4440-8f3d-d30835fec0d7 to disappear Feb 3 13:28:18.072: INFO: Pod pod-2a1b6522-d453-4440-8f3d-d30835fec0d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:28:18.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3400" for this suite. 
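Every spec teardown above logs a `namespace <name> deletion completed in <duration>s` line, and those deletions consistently take just over six seconds in this run. A small sketch (hypothetical helper, not part of the framework) for collecting them:

```python
import re

# Matches the framework's teardown lines, e.g.
#   "namespace emptydir-7759 deletion completed in 6.256323616s"
DEL_RE = re.compile(r"namespace (?P<ns>\S+) deletion completed in (?P<secs>[\d.]+)s")

def teardown_durations(lines):
    """Return {namespace: deletion duration in seconds} from e2e log lines."""
    out = {}
    for line in lines:
        m = DEL_RE.search(line)
        if m:
            out[m.group("ns")] = float(m.group("secs"))
    return out
```

A consistently ~6s floor here is unsurprising: the framework polls for namespace disappearance on a fixed interval, so even an empty namespace takes at least one poll cycle to be reported gone.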
Feb 3 13:28:24.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:28:24.192: INFO: namespace emptydir-3400 deletion completed in 6.114847236s • [SLOW TEST:14.393 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:28:24.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 3 13:28:24.345: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 3 13:28:29.352: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 3 13:28:31.364: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 3 13:28:31.421: INFO: Deployment "test-cleanup-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-9591,SelfLink:/apis/apps/v1/namespaces/deployment-9591/deployments/test-cleanup-deployment,UID:674c9d1d-4589-422a-92d6-59df77c4994b,ResourceVersion:22942090,Generation:1,CreationTimestamp:2020-02-03 13:28:31 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 3 13:28:31.431: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-9591,SelfLink:/apis/apps/v1/namespaces/deployment-9591/replicasets/test-cleanup-deployment-55bbcbc84c,UID:448ed90d-2de1-4135-9944-234f7f4530ba,ResourceVersion:22942092,Generation:1,CreationTimestamp:2020-02-03 13:28:31 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
674c9d1d-4589-422a-92d6-59df77c4994b 0xc002646617 0xc002646618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 3 13:28:31.431: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 3 13:28:31.431: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-9591,SelfLink:/apis/apps/v1/namespaces/deployment-9591/replicasets/test-cleanup-controller,UID:58a83f59-34eb-433a-bc46-664ef3c652e1,ResourceVersion:22942091,Generation:1,CreationTimestamp:2020-02-03 13:28:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 674c9d1d-4589-422a-92d6-59df77c4994b 0xc002646547 0xc002646548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 3 13:28:31.533: INFO: Pod "test-cleanup-controller-qkhgf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-qkhgf,GenerateName:test-cleanup-controller-,Namespace:deployment-9591,SelfLink:/api/v1/namespaces/deployment-9591/pods/test-cleanup-controller-qkhgf,UID:4ae0ed9d-24ff-4130-ac29-9de4dcc206b6,ResourceVersion:22942088,Generation:0,CreationTimestamp:2020-02-03 13:28:24 +0000 
UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 58a83f59-34eb-433a-bc46-664ef3c652e1 0xc001e73b17 0xc001e73b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kdxc5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kdxc5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kdxc5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e73b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e73bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:28:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:28:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:28:31 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:28:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-03 13:28:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 13:28:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://386a42252a44c319aa5b8fc38e0f555d0d53ed188d1b11817e106a39dfd50057}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 3 13:28:31.534: INFO: Pod "test-cleanup-deployment-55bbcbc84c-ps9xs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-ps9xs,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-9591,SelfLink:/api/v1/namespaces/deployment-9591/pods/test-cleanup-deployment-55bbcbc84c-ps9xs,UID:58bee185-dbb5-4aec-b88f-8cc80374f009,ResourceVersion:22942097,Generation:0,CreationTimestamp:2020-02-03 13:28:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 448ed90d-2de1-4135-9944-234f7f4530ba 0xc001e73c97 0xc001e73c98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kdxc5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kdxc5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-kdxc5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e73d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e73d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:28:31 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:28:31.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9591" for this suite. 
Feb 3 13:28:38.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:28:38.457: INFO: namespace deployment-9591 deletion completed in 6.874147344s • [SLOW TEST:14.265 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:28:38.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 3 13:28:38.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf" in namespace "downward-api-7905" to be "success or failure" Feb 3 13:28:38.649: INFO: Pod "downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.248073ms Feb 3 13:28:40.656: INFO: Pod "downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02388255s Feb 3 13:28:42.669: INFO: Pod "downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03686804s Feb 3 13:28:44.688: INFO: Pod "downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055757875s Feb 3 13:28:46.696: INFO: Pod "downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063666608s Feb 3 13:28:48.713: INFO: Pod "downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081606847s STEP: Saw pod success Feb 3 13:28:48.714: INFO: Pod "downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf" satisfied condition "success or failure" Feb 3 13:28:48.720: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf container client-container: STEP: delete the pod Feb 3 13:28:48.842: INFO: Waiting for pod downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf to disappear Feb 3 13:28:48.853: INFO: Pod downwardapi-volume-bf73ca50-c8e5-4a83-a7d1-401a25a553bf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:28:48.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7905" for this suite. 
Feb 3 13:28:54.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:28:55.034: INFO: namespace downward-api-7905 deletion completed in 6.176249512s • [SLOW TEST:16.574 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:28:55.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 3 13:28:55.167: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 13:28:55.179: INFO: Waiting for terminating namespaces to be deleted... 
Feb 3 13:28:55.181: INFO: Logging pods the kubelet thinks is on node iruya-node before test Feb 3 13:28:55.194: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Feb 3 13:28:55.194: INFO: Container weave ready: true, restart count 0 Feb 3 13:28:55.194: INFO: Container weave-npc ready: true, restart count 0 Feb 3 13:28:55.194: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Feb 3 13:28:55.194: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 13:28:55.194: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Feb 3 13:28:55.205: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Feb 3 13:28:55.205: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 13:28:55.205: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Feb 3 13:28:55.205: INFO: Container kube-controller-manager ready: true, restart count 19 Feb 3 13:28:55.205: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Feb 3 13:28:55.205: INFO: Container kube-apiserver ready: true, restart count 0 Feb 3 13:28:55.205: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 3 13:28:55.205: INFO: Container coredns ready: true, restart count 0 Feb 3 13:28:55.205: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Feb 3 13:28:55.205: INFO: Container kube-scheduler ready: true, restart count 13 Feb 3 13:28:55.205: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Feb 3 13:28:55.205: INFO: 
Container weave ready: true, restart count 0 Feb 3 13:28:55.205: INFO: Container weave-npc ready: true, restart count 0 Feb 3 13:28:55.205: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 3 13:28:55.205: INFO: Container coredns ready: true, restart count 0 Feb 3 13:28:55.205: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Feb 3 13:28:55.205: INFO: Container etcd ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-node STEP: verifying the node has the label node iruya-server-sfge57q7djm7 Feb 3 13:28:55.351: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Feb 3 13:28:55.351: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Feb 3 13:28:55.351: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Feb 3 13:28:55.351: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7 Feb 3 13:28:55.351: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7 Feb 3 13:28:55.351: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7 Feb 3 13:28:55.351: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node Feb 3 13:28:55.351: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7 Feb 3 13:28:55.351: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7 Feb 3 13:28:55.351: INFO: Pod weave-net-rlp57 requesting resource 
cpu=20m on Node iruya-node STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1130eaba-1a59-47ae-a538-99479d9ff14b.15efe7810270db08], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9086/filler-pod-1130eaba-1a59-47ae-a538-99479d9ff14b to iruya-server-sfge57q7djm7] STEP: Considering event: Type = [Normal], Name = [filler-pod-1130eaba-1a59-47ae-a538-99479d9ff14b.15efe7823eee6bf8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1130eaba-1a59-47ae-a538-99479d9ff14b.15efe7831962b526], Reason = [Created], Message = [Created container filler-pod-1130eaba-1a59-47ae-a538-99479d9ff14b] STEP: Considering event: Type = [Normal], Name = [filler-pod-1130eaba-1a59-47ae-a538-99479d9ff14b.15efe78338acac46], Reason = [Started], Message = [Started container filler-pod-1130eaba-1a59-47ae-a538-99479d9ff14b] STEP: Considering event: Type = [Normal], Name = [filler-pod-d345e0ea-95f0-429d-b4a7-68d7ce7af92a.15efe780fa8d24e8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9086/filler-pod-d345e0ea-95f0-429d-b4a7-68d7ce7af92a to iruya-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-d345e0ea-95f0-429d-b4a7-68d7ce7af92a.15efe7822482b645], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d345e0ea-95f0-429d-b4a7-68d7ce7af92a.15efe782cd0a9d96], Reason = [Created], Message = [Created container filler-pod-d345e0ea-95f0-429d-b4a7-68d7ce7af92a] STEP: Considering event: Type = [Normal], Name = [filler-pod-d345e0ea-95f0-429d-b4a7-68d7ce7af92a.15efe782ff2b3943], Reason = [Started], Message = [Started container filler-pod-d345e0ea-95f0-429d-b4a7-68d7ce7af92a] STEP: Considering event: Type = [Warning], 
Name = [additional-pod.15efe783d06bde9a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node iruya-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-server-sfge57q7djm7 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:29:08.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9086" for this suite. Feb 3 13:29:16.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:29:16.924: INFO: namespace sched-pred-9086 deletion completed in 8.189902268s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:21.890 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:29:16.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace 
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8862 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 3 13:29:18.736: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 3 13:29:52.909: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8862 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 13:29:52.909: INFO: >>> kubeConfig: /root/.kube/config I0203 13:29:52.990485 8 log.go:172] (0xc000a26370) (0xc001fe8820) Create stream I0203 13:29:52.990518 8 log.go:172] (0xc000a26370) (0xc001fe8820) Stream added, broadcasting: 1 I0203 13:29:52.999057 8 log.go:172] (0xc000a26370) Reply frame received for 1 I0203 13:29:52.999127 8 log.go:172] (0xc000a26370) (0xc00308e000) Create stream I0203 13:29:52.999150 8 log.go:172] (0xc000a26370) (0xc00308e000) Stream added, broadcasting: 3 I0203 13:29:53.001410 8 log.go:172] (0xc000a26370) Reply frame received for 3 I0203 13:29:53.001441 8 log.go:172] (0xc000a26370) (0xc001fe88c0) Create stream I0203 13:29:53.001452 8 log.go:172] (0xc000a26370) (0xc001fe88c0) Stream added, broadcasting: 5 I0203 13:29:53.003428 8 log.go:172] (0xc000a26370) Reply frame received for 5 I0203 13:29:54.152216 8 log.go:172] (0xc000a26370) Data frame received for 3 I0203 13:29:54.152323 8 log.go:172] (0xc00308e000) (3) Data frame handling I0203 13:29:54.152383 8 log.go:172] (0xc00308e000) (3) Data frame sent I0203 13:29:54.284786 8 log.go:172] (0xc000a26370) Data frame received for 1 I0203 13:29:54.285163 8 log.go:172] (0xc000a26370) (0xc00308e000) Stream removed, broadcasting: 3 I0203 13:29:54.285387 8 log.go:172] 
(0xc001fe8820) (1) Data frame handling I0203 13:29:54.285475 8 log.go:172] (0xc001fe8820) (1) Data frame sent I0203 13:29:54.285534 8 log.go:172] (0xc000a26370) (0xc001fe88c0) Stream removed, broadcasting: 5 I0203 13:29:54.285815 8 log.go:172] (0xc000a26370) (0xc001fe8820) Stream removed, broadcasting: 1 I0203 13:29:54.286047 8 log.go:172] (0xc000a26370) Go away received I0203 13:29:54.286492 8 log.go:172] (0xc000a26370) (0xc001fe8820) Stream removed, broadcasting: 1 I0203 13:29:54.286517 8 log.go:172] (0xc000a26370) (0xc00308e000) Stream removed, broadcasting: 3 I0203 13:29:54.286537 8 log.go:172] (0xc000a26370) (0xc001fe88c0) Stream removed, broadcasting: 5 Feb 3 13:29:54.286: INFO: Found all expected endpoints: [netserver-0] Feb 3 13:29:54.298: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8862 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 3 13:29:54.298: INFO: >>> kubeConfig: /root/.kube/config I0203 13:29:54.366907 8 log.go:172] (0xc000c4c580) (0xc002cfe500) Create stream I0203 13:29:54.366951 8 log.go:172] (0xc000c4c580) (0xc002cfe500) Stream added, broadcasting: 1 I0203 13:29:54.372838 8 log.go:172] (0xc000c4c580) Reply frame received for 1 I0203 13:29:54.372876 8 log.go:172] (0xc000c4c580) (0xc00308e0a0) Create stream I0203 13:29:54.372886 8 log.go:172] (0xc000c4c580) (0xc00308e0a0) Stream added, broadcasting: 3 I0203 13:29:54.374476 8 log.go:172] (0xc000c4c580) Reply frame received for 3 I0203 13:29:54.374514 8 log.go:172] (0xc000c4c580) (0xc00308e140) Create stream I0203 13:29:54.374530 8 log.go:172] (0xc000c4c580) (0xc00308e140) Stream added, broadcasting: 5 I0203 13:29:54.375862 8 log.go:172] (0xc000c4c580) Reply frame received for 5 I0203 13:29:55.481674 8 log.go:172] (0xc000c4c580) Data frame received for 3 I0203 13:29:55.481814 8 log.go:172] (0xc00308e0a0) (3) Data frame handling 
I0203 13:29:55.481871 8 log.go:172] (0xc00308e0a0) (3) Data frame sent I0203 13:29:55.638070 8 log.go:172] (0xc000c4c580) Data frame received for 1 I0203 13:29:55.638238 8 log.go:172] (0xc000c4c580) (0xc00308e0a0) Stream removed, broadcasting: 3 I0203 13:29:55.638368 8 log.go:172] (0xc002cfe500) (1) Data frame handling I0203 13:29:55.638434 8 log.go:172] (0xc002cfe500) (1) Data frame sent I0203 13:29:55.638486 8 log.go:172] (0xc000c4c580) (0xc00308e140) Stream removed, broadcasting: 5 I0203 13:29:55.638538 8 log.go:172] (0xc000c4c580) (0xc002cfe500) Stream removed, broadcasting: 1 I0203 13:29:55.638638 8 log.go:172] (0xc000c4c580) Go away received I0203 13:29:55.639427 8 log.go:172] (0xc000c4c580) (0xc002cfe500) Stream removed, broadcasting: 1 I0203 13:29:55.639472 8 log.go:172] (0xc000c4c580) (0xc00308e0a0) Stream removed, broadcasting: 3 I0203 13:29:55.639494 8 log.go:172] (0xc000c4c580) (0xc00308e140) Stream removed, broadcasting: 5 Feb 3 13:29:55.639: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:29:55.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8862" for this suite. 
Feb 3 13:30:21.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:30:21.842: INFO: namespace pod-network-test-8862 deletion completed in 26.17828728s • [SLOW TEST:64.918 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:30:21.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 3 13:30:29.990: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-782" for this suite. Feb 3 13:30:36.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 3 13:30:37.008: INFO: namespace kubelet-test-782 deletion completed in 7.002929024s • [SLOW TEST:15.165 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 3 13:30:37.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 3 13:30:37.299: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 21.415882ms)
Feb  3 13:30:37.309: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.273148ms)
Feb  3 13:30:37.321: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.681814ms)
Feb  3 13:30:37.328: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.565529ms)
Feb  3 13:30:37.336: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.680589ms)
Feb  3 13:30:37.345: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.909502ms)
Feb  3 13:30:37.368: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.494476ms)
Feb  3 13:30:37.378: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.036559ms)
Feb  3 13:30:37.386: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.862948ms)
Feb  3 13:30:37.393: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.684521ms)
Feb  3 13:30:37.400: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.043551ms)
Feb  3 13:30:37.407: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.014639ms)
Feb  3 13:30:37.415: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.645907ms)
Feb  3 13:30:37.422: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.635004ms)
Feb  3 13:30:37.427: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.090396ms)
Feb  3 13:30:37.433: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.806436ms)
Feb  3 13:30:37.439: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.173439ms)
Feb  3 13:30:37.445: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.28752ms)
Feb  3 13:30:37.451: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.949236ms)
Feb  3 13:30:37.458: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.544567ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:30:37.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4611" for this suite.
Feb  3 13:30:43.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:30:43.669: INFO: namespace proxy-4611 deletion completed in 6.205675893s

• [SLOW TEST:6.660 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:30:43.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 13:30:43.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9209'
Feb  3 13:30:45.624: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 13:30:45.624: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  3 13:30:45.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9209'
Feb  3 13:30:45.901: INFO: stderr: ""
Feb  3 13:30:45.902: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:30:45.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9209" for this suite.
Feb  3 13:30:51.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:30:52.100: INFO: namespace kubectl-9209 deletion completed in 6.191187917s

• [SLOW TEST:8.430 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
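The deprecated `kubectl run --generator=job/v1` invocation in the test above (the log itself prints the deprecation warning) corresponds to creating a batch/v1 Job directly, which is what the warning recommends. A minimal sketch of the equivalent manifest, reconstructed from the names and image visible in the log rather than taken from the test source:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      # Jobs only permit OnFailure or Never; this mirrors --restart=OnFailure
      restartPolicy: OnFailure
```

Applying this with `kubectl create -f job.yaml` produces the same `job.batch/e2e-test-nginx-job created` result without relying on the removed generator.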
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:30:52.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-0bca9882-c84e-49fd-afda-c62625e537f0
STEP: Creating a pod to test consume secrets
Feb  3 13:30:52.240: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787" in namespace "projected-7145" to be "success or failure"
Feb  3 13:30:52.249: INFO: Pod "pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787": Phase="Pending", Reason="", readiness=false. Elapsed: 8.701774ms
Feb  3 13:30:54.257: INFO: Pod "pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017332975s
Feb  3 13:30:56.265: INFO: Pod "pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025080803s
Feb  3 13:30:58.275: INFO: Pod "pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035325512s
Feb  3 13:31:00.293: INFO: Pod "pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053444729s
STEP: Saw pod success
Feb  3 13:31:00.293: INFO: Pod "pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787" satisfied condition "success or failure"
Feb  3 13:31:00.297: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787 container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 13:31:00.365: INFO: Waiting for pod pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787 to disappear
Feb  3 13:31:00.376: INFO: Pod pod-projected-secrets-ef306da2-395f-4c31-9a35-da64cc6bf787 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:31:00.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7145" for this suite.
Feb  3 13:31:06.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:31:06.623: INFO: namespace projected-7145 deletion completed in 6.242140504s

• [SLOW TEST:14.523 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
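A pod consuming a projected secret volume as non-root with `defaultMode` and `fsGroup` set, as this test exercises, looks roughly like the following. This is a hedged sketch: the secret name, UIDs, and command are illustrative, not taken from the test source.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  securityContext:
    runAsUser: 1000        # non-root, matching the [LinuxOnly] non-root variant
    fsGroup: 1001          # group ownership applied to the volume
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440    # file mode checked by the test
      sources:
      - secret:
          name: projected-secret-test   # illustrative name
```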
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:31:06.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  3 13:31:06.747: INFO: Waiting up to 5m0s for pod "pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a" in namespace "emptydir-7365" to be "success or failure"
Feb  3 13:31:06.755: INFO: Pod "pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.387725ms
Feb  3 13:31:08.796: INFO: Pod "pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049084889s
Feb  3 13:31:10.804: INFO: Pod "pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056401866s
Feb  3 13:31:12.827: INFO: Pod "pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079449351s
Feb  3 13:31:14.836: INFO: Pod "pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08831739s
Feb  3 13:31:16.844: INFO: Pod "pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096784151s
STEP: Saw pod success
Feb  3 13:31:16.844: INFO: Pod "pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a" satisfied condition "success or failure"
Feb  3 13:31:16.852: INFO: Trying to get logs from node iruya-node pod pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a container test-container: 
STEP: delete the pod
Feb  3 13:31:16.960: INFO: Waiting for pod pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a to disappear
Feb  3 13:31:16.963: INFO: Pod pod-93b264c7-0f52-4fda-bfb5-fe44a54d5a9a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:31:16.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7365" for this suite.
Feb  3 13:31:22.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:31:23.086: INFO: namespace emptydir-7365 deletion completed in 6.116727748s

• [SLOW TEST:16.463 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
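The "volume on default medium" case above boils down to an `emptyDir: {}` with no `medium` set, so the volume is backed by the node's default storage. A minimal sketch (container name taken from the log; the rest is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-default
spec:
  containers:
  - name: test-container
    image: busybox
    # print the mount's mode bits, which is what the test verifies
    command: ["sh", "-c", "stat -c %a /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium: node-local disk, not tmpfs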
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:31:23.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  3 13:31:23.155: INFO: Waiting up to 5m0s for pod "pod-4f47c992-4b27-48d9-83c5-646d98412c34" in namespace "emptydir-4826" to be "success or failure"
Feb  3 13:31:23.159: INFO: Pod "pod-4f47c992-4b27-48d9-83c5-646d98412c34": Phase="Pending", Reason="", readiness=false. Elapsed: 3.520545ms
Feb  3 13:31:25.172: INFO: Pod "pod-4f47c992-4b27-48d9-83c5-646d98412c34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017302606s
Feb  3 13:31:27.206: INFO: Pod "pod-4f47c992-4b27-48d9-83c5-646d98412c34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050591s
Feb  3 13:31:29.221: INFO: Pod "pod-4f47c992-4b27-48d9-83c5-646d98412c34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065898314s
Feb  3 13:31:31.234: INFO: Pod "pod-4f47c992-4b27-48d9-83c5-646d98412c34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078784221s
STEP: Saw pod success
Feb  3 13:31:31.234: INFO: Pod "pod-4f47c992-4b27-48d9-83c5-646d98412c34" satisfied condition "success or failure"
Feb  3 13:31:31.241: INFO: Trying to get logs from node iruya-node pod pod-4f47c992-4b27-48d9-83c5-646d98412c34 container test-container: 
STEP: delete the pod
Feb  3 13:31:31.279: INFO: Waiting for pod pod-4f47c992-4b27-48d9-83c5-646d98412c34 to disappear
Feb  3 13:31:31.287: INFO: Pod pod-4f47c992-4b27-48d9-83c5-646d98412c34 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:31:31.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4826" for this suite.
Feb  3 13:31:37.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:31:37.450: INFO: namespace emptydir-4826 deletion completed in 6.156657857s

• [SLOW TEST:14.363 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
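The (root,0777,tmpfs) variant differs only in setting `medium: Memory`, which backs the emptyDir with tmpfs. A hedged sketch of the shape of such a pod (names and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  containers:
  - name: test-container
    image: busybox
    # running as root, create a 0777 file on the tmpfs-backed volume and report its mode
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs
```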
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:31:37.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-f0e75bf2-5ef2-406b-8842-d14082a70efa in namespace container-probe-880
Feb  3 13:31:45.649: INFO: Started pod liveness-f0e75bf2-5ef2-406b-8842-d14082a70efa in namespace container-probe-880
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 13:31:45.655: INFO: Initial restart count of pod liveness-f0e75bf2-5ef2-406b-8842-d14082a70efa is 0
Feb  3 13:32:03.779: INFO: Restart count of pod container-probe-880/liveness-f0e75bf2-5ef2-406b-8842-d14082a70efa is now 1 (18.124349689s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:32:03.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-880" for this suite.
Feb  3 13:32:09.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:32:10.060: INFO: namespace container-probe-880 deletion completed in 6.1554174s

• [SLOW TEST:32.609 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
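The restart observed above (restartCount 0 to 1 in ~18s) is driven by an HTTP liveness probe against `/healthz`. A sketch of the probe configuration involved; the image tag and timing values are assumptions for illustration, not read from the test source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    # illustrative: an image that serves /healthz and later starts failing it
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1   # restart on the first failed probe
```

Once `/healthz` returns a non-2xx status, the kubelet kills the container and restarts it, incrementing `restartCount` exactly as the log records.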
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:32:10.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb  3 13:32:10.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8743'
Feb  3 13:32:10.646: INFO: stderr: ""
Feb  3 13:32:10.646: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 13:32:10.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8743'
Feb  3 13:32:10.923: INFO: stderr: ""
Feb  3 13:32:10.923: INFO: stdout: "update-demo-nautilus-f4sm5 update-demo-nautilus-xwrbw "
Feb  3 13:32:10.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f4sm5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:11.144: INFO: stderr: ""
Feb  3 13:32:11.144: INFO: stdout: ""
Feb  3 13:32:11.144: INFO: update-demo-nautilus-f4sm5 is created but not running
Feb  3 13:32:16.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8743'
Feb  3 13:32:16.772: INFO: stderr: ""
Feb  3 13:32:16.772: INFO: stdout: "update-demo-nautilus-f4sm5 update-demo-nautilus-xwrbw "
Feb  3 13:32:16.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f4sm5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:17.257: INFO: stderr: ""
Feb  3 13:32:17.257: INFO: stdout: ""
Feb  3 13:32:17.257: INFO: update-demo-nautilus-f4sm5 is created but not running
Feb  3 13:32:22.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8743'
Feb  3 13:32:22.436: INFO: stderr: ""
Feb  3 13:32:22.436: INFO: stdout: "update-demo-nautilus-f4sm5 update-demo-nautilus-xwrbw "
Feb  3 13:32:22.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f4sm5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:22.558: INFO: stderr: ""
Feb  3 13:32:22.558: INFO: stdout: "true"
Feb  3 13:32:22.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f4sm5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:22.670: INFO: stderr: ""
Feb  3 13:32:22.670: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 13:32:22.670: INFO: validating pod update-demo-nautilus-f4sm5
Feb  3 13:32:22.696: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 13:32:22.696: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 13:32:22.696: INFO: update-demo-nautilus-f4sm5 is verified up and running
Feb  3 13:32:22.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwrbw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:22.787: INFO: stderr: ""
Feb  3 13:32:22.787: INFO: stdout: "true"
Feb  3 13:32:22.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwrbw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:22.894: INFO: stderr: ""
Feb  3 13:32:22.894: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 13:32:22.894: INFO: validating pod update-demo-nautilus-xwrbw
Feb  3 13:32:22.906: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 13:32:22.906: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 13:32:22.906: INFO: update-demo-nautilus-xwrbw is verified up and running
STEP: rolling-update to new replication controller
Feb  3 13:32:22.909: INFO: scanned /root for discovery docs: 
Feb  3 13:32:22.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8743'
Feb  3 13:32:53.540: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  3 13:32:53.541: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 13:32:53.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8743'
Feb  3 13:32:53.709: INFO: stderr: ""
Feb  3 13:32:53.709: INFO: stdout: "update-demo-kitten-n59mt update-demo-kitten-sgwks update-demo-nautilus-xwrbw "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb  3 13:32:58.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8743'
Feb  3 13:32:58.848: INFO: stderr: ""
Feb  3 13:32:58.849: INFO: stdout: "update-demo-kitten-n59mt update-demo-kitten-sgwks "
Feb  3 13:32:58.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n59mt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:58.942: INFO: stderr: ""
Feb  3 13:32:58.942: INFO: stdout: "true"
Feb  3 13:32:58.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n59mt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:59.018: INFO: stderr: ""
Feb  3 13:32:59.018: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  3 13:32:59.018: INFO: validating pod update-demo-kitten-n59mt
Feb  3 13:32:59.051: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  3 13:32:59.051: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb  3 13:32:59.051: INFO: update-demo-kitten-n59mt is verified up and running
Feb  3 13:32:59.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sgwks -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:59.201: INFO: stderr: ""
Feb  3 13:32:59.201: INFO: stdout: "true"
Feb  3 13:32:59.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sgwks -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8743'
Feb  3 13:32:59.282: INFO: stderr: ""
Feb  3 13:32:59.282: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  3 13:32:59.282: INFO: validating pod update-demo-kitten-sgwks
Feb  3 13:32:59.312: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  3 13:32:59.313: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb  3 13:32:59.313: INFO: update-demo-kitten-sgwks is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:32:59.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8743" for this suite.
Feb  3 13:33:21.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:33:21.473: INFO: namespace kubectl-8743 deletion completed in 22.142032005s

• [SLOW TEST:71.413 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
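`kubectl rolling-update` on replication controllers is deprecated (the log prints the warning and recommends `rollout`). The same constraints the log describes, "keep 2 pods available, don't exceed 3 pods", map onto a Deployment's rolling-update strategy. A reconstructed sketch under that assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: update-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      name: update-demo
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep 2 pods available
      maxSurge: 1         # don't exceed 3 pods
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

Switching the image with `kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0` then performs the equivalent nautilus-to-kitten rollout.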
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:33:21.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 13:33:21.790: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"235e416a-8ab0-4d55-86e3-68201592e394", Controller:(*bool)(0xc0029ead52), BlockOwnerDeletion:(*bool)(0xc0029ead53)}}
Feb  3 13:33:21.873: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8e8576a4-b554-45e7-bcb9-6eb7869108f1", Controller:(*bool)(0xc0029eaefa), BlockOwnerDeletion:(*bool)(0xc0029eaefb)}}
Feb  3 13:33:21.898: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5fd1ef7a-dfbd-4806-98bd-90c77488ae88", Controller:(*bool)(0xc0029eb0ba), BlockOwnerDeletion:(*bool)(0xc0029eb0bb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:33:26.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8962" for this suite.
Feb  3 13:33:32.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:33:33.119: INFO: namespace gc-8962 deletion completed in 6.162819628s

• [SLOW TEST:11.645 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
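The three OwnerReference lines above form a cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2; the test verifies the garbage collector still deletes all three. As a sketch, the metadata for one pod in the cycle (the UID is copied from the log; the boolean values are assumed `true`, since the log only prints pointer addresses for `Controller` and `BlockOwnerDeletion`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 235e416a-8ab0-4d55-86e3-68201592e394   # from the log line for pod1
    controller: true            # assumed
    blockOwnerDeletion: true    # assumed
```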
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:33:33.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 13:33:33.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50" in namespace "projected-6003" to be "success or failure"
Feb  3 13:33:33.245: INFO: Pod "downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26164ms
Feb  3 13:33:35.253: INFO: Pod "downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016471092s
Feb  3 13:33:37.273: INFO: Pod "downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036161826s
Feb  3 13:33:40.001: INFO: Pod "downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764715055s
Feb  3 13:33:42.020: INFO: Pod "downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.783640838s
STEP: Saw pod success
Feb  3 13:33:42.020: INFO: Pod "downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50" satisfied condition "success or failure"
Feb  3 13:33:42.026: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50 container client-container: 
STEP: delete the pod
Feb  3 13:33:42.141: INFO: Waiting for pod downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50 to disappear
Feb  3 13:33:42.148: INFO: Pod downwardapi-volume-cb25b7ee-ace7-4a28-983c-abf08e7d6a50 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:33:42.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6003" for this suite.
Feb  3 13:33:48.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:33:48.361: INFO: namespace projected-6003 deletion completed in 6.208503983s

• [SLOW TEST:15.242 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:33:48.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:33:48.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1704" for this suite.
Feb  3 13:33:54.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:33:54.663: INFO: namespace services-1704 deletion completed in 6.2055946s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.300 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:33:54.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-f00c3de4-9aa4-4472-8b82-c3bd37fc2cd4
STEP: Creating a pod to test consume configMaps
Feb  3 13:33:54.787: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f" in namespace "projected-5714" to be "success or failure"
Feb  3 13:33:54.815: INFO: Pod "pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.423181ms
Feb  3 13:33:56.826: INFO: Pod "pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03902015s
Feb  3 13:33:58.835: INFO: Pod "pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048462932s
Feb  3 13:34:00.844: INFO: Pod "pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056975392s
Feb  3 13:34:02.862: INFO: Pod "pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07524639s
STEP: Saw pod success
Feb  3 13:34:02.862: INFO: Pod "pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f" satisfied condition "success or failure"
Feb  3 13:34:02.873: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 13:34:02.929: INFO: Waiting for pod pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f to disappear
Feb  3 13:34:03.023: INFO: Pod pod-projected-configmaps-f5a1d586-5d3e-4f81-8662-d2bb7056312f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:34:03.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5714" for this suite.
Feb  3 13:34:09.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:34:09.224: INFO: namespace projected-5714 deletion completed in 6.193473484s

• [SLOW TEST:14.561 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:34:09.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  3 13:34:09.337: INFO: Waiting up to 5m0s for pod "pod-4f049707-758a-48f5-b8b3-88f1891fb680" in namespace "emptydir-7708" to be "success or failure"
Feb  3 13:34:09.344: INFO: Pod "pod-4f049707-758a-48f5-b8b3-88f1891fb680": Phase="Pending", Reason="", readiness=false. Elapsed: 7.275594ms
Feb  3 13:34:11.856: INFO: Pod "pod-4f049707-758a-48f5-b8b3-88f1891fb680": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519544579s
Feb  3 13:34:13.865: INFO: Pod "pod-4f049707-758a-48f5-b8b3-88f1891fb680": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52793645s
Feb  3 13:34:15.876: INFO: Pod "pod-4f049707-758a-48f5-b8b3-88f1891fb680": Phase="Pending", Reason="", readiness=false. Elapsed: 6.539181108s
Feb  3 13:34:17.908: INFO: Pod "pod-4f049707-758a-48f5-b8b3-88f1891fb680": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.571163665s
STEP: Saw pod success
Feb  3 13:34:17.908: INFO: Pod "pod-4f049707-758a-48f5-b8b3-88f1891fb680" satisfied condition "success or failure"
Feb  3 13:34:17.935: INFO: Trying to get logs from node iruya-node pod pod-4f049707-758a-48f5-b8b3-88f1891fb680 container test-container: 
STEP: delete the pod
Feb  3 13:34:18.576: INFO: Waiting for pod pod-4f049707-758a-48f5-b8b3-88f1891fb680 to disappear
Feb  3 13:34:18.594: INFO: Pod pod-4f049707-758a-48f5-b8b3-88f1891fb680 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:34:18.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7708" for this suite.
Feb  3 13:34:24.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:34:24.782: INFO: namespace emptydir-7708 deletion completed in 6.178009307s

• [SLOW TEST:15.557 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:34:24.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb  3 13:34:24.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  3 13:34:25.020: INFO: stderr: ""
Feb  3 13:34:25.020: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:34:25.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3319" for this suite.
Feb  3 13:34:31.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:34:31.193: INFO: namespace kubectl-3319 deletion completed in 6.166156164s

• [SLOW TEST:6.410 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:34:31.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 13:34:31.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d" in namespace "projected-7516" to be "success or failure"
Feb  3 13:34:31.354: INFO: Pod "downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.991756ms
Feb  3 13:34:33.369: INFO: Pod "downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026534421s
Feb  3 13:34:35.377: INFO: Pod "downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034657674s
Feb  3 13:34:37.394: INFO: Pod "downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051554532s
Feb  3 13:34:39.404: INFO: Pod "downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062076887s
STEP: Saw pod success
Feb  3 13:34:39.404: INFO: Pod "downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d" satisfied condition "success or failure"
Feb  3 13:34:39.409: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d container client-container: 
STEP: delete the pod
Feb  3 13:34:39.593: INFO: Waiting for pod downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d to disappear
Feb  3 13:34:39.601: INFO: Pod downwardapi-volume-8d106e29-4b21-429e-a5ff-44e987bcc81d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:34:39.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7516" for this suite.
Feb  3 13:34:45.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:34:45.796: INFO: namespace projected-7516 deletion completed in 6.1874584s

• [SLOW TEST:14.603 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:34:45.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-c116b5f5-5850-4482-bfd3-a791e05af7df
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-c116b5f5-5850-4482-bfd3-a791e05af7df
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:34:58.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5562" for this suite.
Feb  3 13:35:20.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:35:20.427: INFO: namespace configmap-5562 deletion completed in 22.214321953s

• [SLOW TEST:34.631 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:35:20.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  3 13:35:20.525: INFO: Waiting up to 5m0s for pod "pod-f0ac75c6-ef82-4614-8987-4079c2f8323a" in namespace "emptydir-8012" to be "success or failure"
Feb  3 13:35:20.543: INFO: Pod "pod-f0ac75c6-ef82-4614-8987-4079c2f8323a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.972119ms
Feb  3 13:35:22.569: INFO: Pod "pod-f0ac75c6-ef82-4614-8987-4079c2f8323a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043777962s
Feb  3 13:35:24.582: INFO: Pod "pod-f0ac75c6-ef82-4614-8987-4079c2f8323a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056882271s
Feb  3 13:35:26.599: INFO: Pod "pod-f0ac75c6-ef82-4614-8987-4079c2f8323a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073356109s
Feb  3 13:35:28.616: INFO: Pod "pod-f0ac75c6-ef82-4614-8987-4079c2f8323a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089935869s
STEP: Saw pod success
Feb  3 13:35:28.616: INFO: Pod "pod-f0ac75c6-ef82-4614-8987-4079c2f8323a" satisfied condition "success or failure"
Feb  3 13:35:28.620: INFO: Trying to get logs from node iruya-node pod pod-f0ac75c6-ef82-4614-8987-4079c2f8323a container test-container: 
STEP: delete the pod
Feb  3 13:35:28.805: INFO: Waiting for pod pod-f0ac75c6-ef82-4614-8987-4079c2f8323a to disappear
Feb  3 13:35:28.827: INFO: Pod pod-f0ac75c6-ef82-4614-8987-4079c2f8323a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:35:28.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8012" for this suite.
Feb  3 13:35:34.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:35:35.036: INFO: namespace emptydir-8012 deletion completed in 6.203850871s

• [SLOW TEST:14.609 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:35:35.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 13:35:35.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe" in namespace "projected-8011" to be "success or failure"
Feb  3 13:35:35.157: INFO: Pod "downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.644051ms
Feb  3 13:35:37.171: INFO: Pod "downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022678191s
Feb  3 13:35:39.183: INFO: Pod "downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035133975s
Feb  3 13:35:41.196: INFO: Pod "downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047527387s
Feb  3 13:35:43.710: INFO: Pod "downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.561865832s
STEP: Saw pod success
Feb  3 13:35:43.710: INFO: Pod "downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe" satisfied condition "success or failure"
Feb  3 13:35:43.719: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe container client-container: 
STEP: delete the pod
Feb  3 13:35:43.934: INFO: Waiting for pod downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe to disappear
Feb  3 13:35:43.940: INFO: Pod downwardapi-volume-8ee05948-db7b-4fa7-a3cd-8dbf6e478cfe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:35:43.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8011" for this suite.
Feb  3 13:35:49.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:35:50.067: INFO: namespace projected-8011 deletion completed in 6.12079828s

• [SLOW TEST:15.031 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:35:50.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  3 13:35:50.176: INFO: Waiting up to 5m0s for pod "pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792" in namespace "emptydir-2048" to be "success or failure"
Feb  3 13:35:50.191: INFO: Pod "pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792": Phase="Pending", Reason="", readiness=false. Elapsed: 14.957233ms
Feb  3 13:35:52.223: INFO: Pod "pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047115683s
Feb  3 13:35:54.239: INFO: Pod "pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062659514s
Feb  3 13:35:56.252: INFO: Pod "pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076327379s
Feb  3 13:35:58.260: INFO: Pod "pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084170563s
Feb  3 13:36:00.275: INFO: Pod "pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098813297s
STEP: Saw pod success
Feb  3 13:36:00.275: INFO: Pod "pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792" satisfied condition "success or failure"
Feb  3 13:36:00.281: INFO: Trying to get logs from node iruya-node pod pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792 container test-container: 
STEP: delete the pod
Feb  3 13:36:00.432: INFO: Waiting for pod pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792 to disappear
Feb  3 13:36:00.471: INFO: Pod pod-cfdd77d0-b31d-4de1-bfc1-c21320c3c792 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:36:00.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2048" for this suite.
Feb  3 13:36:06.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:36:06.712: INFO: namespace emptydir-2048 deletion completed in 6.23008349s

• [SLOW TEST:16.644 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:36:06.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6837c177-2570-45ce-8d2e-ae23f557fce1
STEP: Creating a pod to test consume configMaps
Feb  3 13:36:06.838: INFO: Waiting up to 5m0s for pod "pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4" in namespace "configmap-8385" to be "success or failure"
Feb  3 13:36:06.868: INFO: Pod "pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 29.299524ms
Feb  3 13:36:08.881: INFO: Pod "pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042895862s
Feb  3 13:36:10.892: INFO: Pod "pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053495904s
Feb  3 13:36:12.916: INFO: Pod "pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077834004s
Feb  3 13:36:14.931: INFO: Pod "pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093184262s
STEP: Saw pod success
Feb  3 13:36:14.931: INFO: Pod "pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4" satisfied condition "success or failure"
Feb  3 13:36:14.943: INFO: Trying to get logs from node iruya-node pod pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4 container configmap-volume-test: 
STEP: delete the pod
Feb  3 13:36:15.113: INFO: Waiting for pod pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4 to disappear
Feb  3 13:36:15.121: INFO: Pod pod-configmaps-063c2763-d931-40e0-b4b0-01f7d14c1fd4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:36:15.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8385" for this suite.
Feb  3 13:36:21.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:36:21.284: INFO: namespace configmap-8385 deletion completed in 6.157696894s

• [SLOW TEST:14.572 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:36:21.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 13:36:21.392: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.72888ms)
Feb  3 13:36:21.443: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 51.786604ms)
Feb  3 13:36:21.451: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.384448ms)
Feb  3 13:36:21.456: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.382866ms)
Feb  3 13:36:21.462: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.566677ms)
Feb  3 13:36:21.467: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.964699ms)
Feb  3 13:36:21.472: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.417879ms)
Feb  3 13:36:21.477: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.94605ms)
Feb  3 13:36:21.483: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.783823ms)
Feb  3 13:36:21.488: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.776836ms)
Feb  3 13:36:21.494: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.098706ms)
Feb  3 13:36:21.500: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.98329ms)
Feb  3 13:36:21.510: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.58988ms)
Feb  3 13:36:21.518: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.683397ms)
Feb  3 13:36:21.523: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.013284ms)
Feb  3 13:36:21.531: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.244201ms)
Feb  3 13:36:21.536: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.163036ms)
Feb  3 13:36:21.542: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.980347ms)
Feb  3 13:36:21.554: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.168077ms)
Feb  3 13:36:21.565: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.460766ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:36:21.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4558" for this suite.
Feb  3 13:36:27.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:36:27.731: INFO: namespace proxy-4558 deletion completed in 6.15881363s

• [SLOW TEST:6.445 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
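The twenty proxied requests above all hit the node's `logs/` proxy subresource. A sketch of how to issue the same request by hand, assuming a live cluster, the node name from the log, and a kubeconfig with node-proxy permissions:

```shell
# Query the kubelet's log directory through the API server's node proxy
# subresource (the same endpoint the test above hits 20 times).
# Requires a running cluster; not runnable standalone.
kubectl get --raw "/api/v1/nodes/iruya-node/proxy/logs/"
```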
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:36:27.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb  3 13:36:27.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5121'
Feb  3 13:36:28.249: INFO: stderr: ""
Feb  3 13:36:28.249: INFO: stdout: "pod/pause created\n"
Feb  3 13:36:28.249: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  3 13:36:28.249: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5121" to be "running and ready"
Feb  3 13:36:28.253: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.966223ms
Feb  3 13:36:30.267: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017470615s
Feb  3 13:36:32.279: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029938179s
Feb  3 13:36:34.308: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05842484s
Feb  3 13:36:36.330: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.080227576s
Feb  3 13:36:36.330: INFO: Pod "pause" satisfied condition "running and ready"
Feb  3 13:36:36.330: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  3 13:36:36.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5121'
Feb  3 13:36:37.140: INFO: stderr: ""
Feb  3 13:36:37.140: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  3 13:36:37.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5121'
Feb  3 13:36:37.265: INFO: stderr: ""
Feb  3 13:36:37.265: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  3 13:36:37.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5121'
Feb  3 13:36:37.393: INFO: stderr: ""
Feb  3 13:36:37.393: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  3 13:36:37.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5121'
Feb  3 13:36:37.484: INFO: stderr: ""
Feb  3 13:36:37.484: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb  3 13:36:37.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5121'
Feb  3 13:36:37.622: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 13:36:37.623: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  3 13:36:37.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5121'
Feb  3 13:36:37.805: INFO: stderr: "No resources found.\n"
Feb  3 13:36:37.805: INFO: stdout: ""
Feb  3 13:36:37.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5121 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  3 13:36:37.914: INFO: stderr: ""
Feb  3 13:36:37.915: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:36:37.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5121" for this suite.
Feb  3 13:36:44.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:36:44.170: INFO: namespace kubectl-5121 deletion completed in 6.241883961s

• [SLOW TEST:16.439 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
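The label round-trip exercised above can be repeated by hand. A sketch, assuming a live cluster with a pod named `pause` in the current namespace:

```shell
# Add the label, show it as a column, then remove it with the
# trailing-dash syntax -- the same three kubectl steps the test runs.
kubectl label pod pause testing-label=testing-label-value
kubectl get pod pause -L testing-label
kubectl label pod pause testing-label-
```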
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:36:44.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  3 13:36:52.847: INFO: Successfully updated pod "pod-update-activedeadlineseconds-294ec7dc-6bce-482c-9cca-d8a4b7818e88"
Feb  3 13:36:52.848: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-294ec7dc-6bce-482c-9cca-d8a4b7818e88" in namespace "pods-1889" to be "terminated due to deadline exceeded"
Feb  3 13:36:52.863: INFO: Pod "pod-update-activedeadlineseconds-294ec7dc-6bce-482c-9cca-d8a4b7818e88": Phase="Running", Reason="", readiness=true. Elapsed: 15.68585ms
Feb  3 13:36:54.889: INFO: Pod "pod-update-activedeadlineseconds-294ec7dc-6bce-482c-9cca-d8a4b7818e88": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.041542985s
Feb  3 13:36:54.889: INFO: Pod "pod-update-activedeadlineseconds-294ec7dc-6bce-482c-9cca-d8a4b7818e88" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:36:54.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1889" for this suite.
Feb  3 13:37:00.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:37:01.047: INFO: namespace pods-1889 deletion completed in 6.136497075s

• [SLOW TEST:16.876 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
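The test above updates a running pod's `activeDeadlineSeconds` and waits for the kubelet to fail it with reason `DeadlineExceeded`. A hedged sketch of the same update against a live cluster; the pod name is illustrative, not from the log:

```shell
# Shorten the deadline on a running pod; once it elapses, the kubelet
# kills the pod and sets Phase=Failed, Reason=DeadlineExceeded.
kubectl patch pod my-pod --type merge -p '{"spec":{"activeDeadlineSeconds":5}}'
# Inspect the resulting status reason after the deadline passes.
kubectl get pod my-pod -o jsonpath='{.status.reason}'
```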
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:37:01.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 13:37:01.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1220'
Feb  3 13:37:01.320: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 13:37:01.320: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb  3 13:37:03.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1220'
Feb  3 13:37:03.666: INFO: stderr: ""
Feb  3 13:37:03.667: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:37:03.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1220" for this suite.
Feb  3 13:37:09.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:37:09.913: INFO: namespace kubectl-1220 deletion completed in 6.236363102s

• [SLOW TEST:8.865 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
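The stderr captured above warns that `kubectl run --generator=deployment/apps.v1` is deprecated. Non-deprecated equivalents (the second form is the one the warning itself suggests), assuming a live cluster:

```shell
# Create the same Deployment without the deprecated generator:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# Or run a bare pod, as the deprecation message recommends:
kubectl run e2e-test-nginx --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
```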
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:37:09.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  3 13:37:09.967: INFO: namespace kubectl-3797
Feb  3 13:37:09.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3797'
Feb  3 13:37:10.390: INFO: stderr: ""
Feb  3 13:37:10.390: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  3 13:37:11.404: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 13:37:11.404: INFO: Found 0 / 1
Feb  3 13:37:12.405: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 13:37:12.405: INFO: Found 0 / 1
Feb  3 13:37:13.406: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 13:37:13.406: INFO: Found 0 / 1
Feb  3 13:37:14.401: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 13:37:14.401: INFO: Found 0 / 1
Feb  3 13:37:15.404: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 13:37:15.404: INFO: Found 0 / 1
Feb  3 13:37:16.402: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 13:37:16.402: INFO: Found 0 / 1
Feb  3 13:37:17.403: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 13:37:17.403: INFO: Found 1 / 1
Feb  3 13:37:17.403: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  3 13:37:17.409: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 13:37:17.409: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  3 13:37:17.409: INFO: wait on redis-master startup in kubectl-3797 
Feb  3 13:37:17.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5j7xt redis-master --namespace=kubectl-3797'
Feb  3 13:37:17.649: INFO: stderr: ""
Feb  3 13:37:17.649: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 03 Feb 13:37:16.346 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Feb 13:37:16.346 # Server started, Redis version 3.2.12\n1:M 03 Feb 13:37:16.346 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Feb 13:37:16.347 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  3 13:37:17.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3797'
Feb  3 13:37:17.900: INFO: stderr: ""
Feb  3 13:37:17.900: INFO: stdout: "service/rm2 exposed\n"
Feb  3 13:37:17.906: INFO: Service rm2 in namespace kubectl-3797 found.
STEP: exposing service
Feb  3 13:37:19.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3797'
Feb  3 13:37:20.109: INFO: stderr: ""
Feb  3 13:37:20.109: INFO: stdout: "service/rm3 exposed\n"
Feb  3 13:37:20.116: INFO: Service rm3 in namespace kubectl-3797 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:37:22.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3797" for this suite.
Feb  3 13:37:44.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:37:44.290: INFO: namespace kubectl-3797 deletion completed in 22.14542863s

• [SLOW TEST:34.377 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
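The two expose steps from the log, extracted as standalone commands. This assumes a live cluster where the `redis-master` replication controller already exists:

```shell
# Expose the RC as a service, then expose that service under a new
# name/port -- both services forward to the pods' port 6379.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
```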
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:37:44.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7453.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7453.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 13:37:56.460: INFO: File wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local from pod  dns-7453/dns-test-ff8d1799-41a0-4c0b-999d-77a4a0a55e70 contains '' instead of 'foo.example.com.'
Feb  3 13:37:56.479: INFO: File jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local from pod  dns-7453/dns-test-ff8d1799-41a0-4c0b-999d-77a4a0a55e70 contains '' instead of 'foo.example.com.'
Feb  3 13:37:56.479: INFO: Lookups using dns-7453/dns-test-ff8d1799-41a0-4c0b-999d-77a4a0a55e70 failed for: [wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local]

Feb  3 13:38:01.557: INFO: File wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local from pod  dns-7453/dns-test-ff8d1799-41a0-4c0b-999d-77a4a0a55e70 contains '' instead of 'foo.example.com.'
Feb  3 13:38:01.576: INFO: Lookups using dns-7453/dns-test-ff8d1799-41a0-4c0b-999d-77a4a0a55e70 failed for: [wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local]

Feb  3 13:38:06.555: INFO: DNS probes using dns-test-ff8d1799-41a0-4c0b-999d-77a4a0a55e70 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7453.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7453.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 13:38:20.819: INFO: File wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local from pod  dns-7453/dns-test-7a3e1e09-b10d-4533-9605-98337e9d38da contains '' instead of 'bar.example.com.'
Feb  3 13:38:20.829: INFO: File jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local from pod  dns-7453/dns-test-7a3e1e09-b10d-4533-9605-98337e9d38da contains '' instead of 'bar.example.com.'
Feb  3 13:38:20.829: INFO: Lookups using dns-7453/dns-test-7a3e1e09-b10d-4533-9605-98337e9d38da failed for: [wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local]

Feb  3 13:38:25.841: INFO: File wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local from pod  dns-7453/dns-test-7a3e1e09-b10d-4533-9605-98337e9d38da contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 13:38:25.846: INFO: File jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local from pod  dns-7453/dns-test-7a3e1e09-b10d-4533-9605-98337e9d38da contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  3 13:38:25.846: INFO: Lookups using dns-7453/dns-test-7a3e1e09-b10d-4533-9605-98337e9d38da failed for: [wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local]

Feb  3 13:38:30.851: INFO: DNS probes using dns-test-7a3e1e09-b10d-4533-9605-98337e9d38da succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7453.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7453.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 13:38:45.261: INFO: File wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local from pod  dns-7453/dns-test-498a6253-0119-401e-aa49-3a5c8ba99ff1 contains '' instead of '10.106.78.36'
Feb  3 13:38:45.269: INFO: File jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local from pod  dns-7453/dns-test-498a6253-0119-401e-aa49-3a5c8ba99ff1 contains '' instead of '10.106.78.36'
Feb  3 13:38:45.269: INFO: Lookups using dns-7453/dns-test-498a6253-0119-401e-aa49-3a5c8ba99ff1 failed for: [wheezy_udp@dns-test-service-3.dns-7453.svc.cluster.local jessie_udp@dns-test-service-3.dns-7453.svc.cluster.local]

Feb  3 13:38:50.290: INFO: DNS probes using dns-test-498a6253-0119-401e-aa49-3a5c8ba99ff1 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:38:50.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7453" for this suite.
Feb  3 13:38:58.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:38:58.766: INFO: namespace dns-7453 deletion completed in 8.245709439s

• [SLOW TEST:74.475 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
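A sketch of the ExternalName service the DNS test creates and later mutates; the service and namespace names follow the log, and a live cluster is assumed. A CNAME lookup on the service's cluster DNS name then resolves to `foo.example.com.`, as the probe output above shows:

```shell
# ExternalName services publish a CNAME rather than a ClusterIP.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-7453
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
```

The test then changes `externalName` to `bar.example.com` and finally switches the service to `type: ClusterIP`, re-probing after each change.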
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:38:58.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 13:38:58.817: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  3 13:39:00.980: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:39:02.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4921" for this suite.
Feb  3 13:39:12.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:39:12.190: INFO: namespace replication-controller-4921 deletion completed in 10.132449954s

• [SLOW TEST:13.423 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
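The quota scenario above can be reproduced by hand. A hedged sketch assuming a live cluster; the quota of two pods plus an RC requesting more replicas surfaces a `ReplicaFailure` condition on the RC, which clears once the RC is scaled back within quota:

```shell
# Quota that allows only two pods in the namespace.
kubectl create quota condition-test --hard=pods=2
# After creating an RC named condition-test with 3 replicas,
# look for the ReplicaFailure condition on its status:
kubectl get rc condition-test -o jsonpath='{.status.conditions}'
# Scaling down to fit the quota removes the failure condition.
kubectl scale rc condition-test --replicas=2
```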
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:39:12.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-2051
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2051 to expose endpoints map[]
Feb  3 13:39:12.351: INFO: Get endpoints failed (6.315083ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  3 13:39:13.363: INFO: successfully validated that service multi-endpoint-test in namespace services-2051 exposes endpoints map[] (1.018204477s elapsed)
STEP: Creating pod pod1 in namespace services-2051
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2051 to expose endpoints map[pod1:[100]]
Feb  3 13:39:17.470: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.088034128s elapsed, will retry)
Feb  3 13:39:20.524: INFO: successfully validated that service multi-endpoint-test in namespace services-2051 exposes endpoints map[pod1:[100]] (7.141115625s elapsed)
STEP: Creating pod pod2 in namespace services-2051
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2051 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  3 13:39:25.466: INFO: Unexpected endpoints: found map[4445a1ef-6cf4-4655-b930-7ee23209af2a:[100]], expected map[pod1:[100] pod2:[101]] (4.927566261s elapsed, will retry)
Feb  3 13:39:27.505: INFO: successfully validated that service multi-endpoint-test in namespace services-2051 exposes endpoints map[pod1:[100] pod2:[101]] (6.966889838s elapsed)
STEP: Deleting pod pod1 in namespace services-2051
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2051 to expose endpoints map[pod2:[101]]
Feb  3 13:39:28.627: INFO: successfully validated that service multi-endpoint-test in namespace services-2051 exposes endpoints map[pod2:[101]] (1.111494082s elapsed)
STEP: Deleting pod pod2 in namespace services-2051
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2051 to expose endpoints map[]
Feb  3 13:39:28.648: INFO: successfully validated that service multi-endpoint-test in namespace services-2051 exposes endpoints map[] (14.408013ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:39:28.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2051" for this suite.
Feb  3 13:39:50.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:39:51.006: INFO: namespace services-2051 deletion completed in 22.18690148s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.815 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
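The "waiting up to 3m0s for service ... to expose endpoints" lines in the block above come from a poll-until-match loop: the framework repeatedly reads the service's endpoints and compares them to the expected pod-name-to-ports map, retrying until they agree or a timeout expires. A minimal sketch of that loop (not the e2e framework's actual code; `wait_for_endpoints` and its parameters are illustrative):

```python
import time

def wait_for_endpoints(get_endpoints, expected, timeout=180.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_endpoints() until it equals `expected` or `timeout` elapses.

    get_endpoints: callable returning a dict like {"pod1": [100]}.
    Returns the elapsed seconds on success; raises TimeoutError otherwise.
    """
    start = clock()
    while True:
        found = get_endpoints()
        elapsed = clock() - start
        if found == expected:
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError("found %r, expected %r after %.1fs"
                               % (found, expected, elapsed))
        sleep(interval)
```

The injectable `clock`/`sleep` hooks are only there so the loop can be exercised without real waiting; the real framework logs each mismatch ("Unexpected endpoints: ... will retry") before sleeping.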
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:39:51.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 13:40:19.176: INFO: Container started at 2020-02-03 13:39:57 +0000 UTC, pod became ready at 2020-02-03 13:40:18 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:40:19.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6357" for this suite.
Feb  3 13:40:41.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:40:41.333: INFO: namespace container-probe-6357 deletion completed in 22.146444131s

• [SLOW TEST:50.327 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
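The probe test above passes because the pod became ready 21 seconds after the container started (13:39:57 to 13:40:18), i.e. not before the readiness probe's configured initial delay, and the container never restarted. A toy version of the timing assertion (the 20-second delay here is an assumed value for illustration, not read from the test's pod spec):

```python
from datetime import datetime, timedelta

def readiness_respects_initial_delay(started_at, ready_at, initial_delay_s):
    """True if the pod only became ready at or after the readiness probe's
    initialDelaySeconds. Mirrors the check behind the log line
    'Container started at ..., pod became ready at ...'."""
    return ready_at - started_at >= timedelta(seconds=initial_delay_s)
```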
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:40:41.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:40:41.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3484" for this suite.
Feb  3 13:40:47.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:40:47.723: INFO: namespace kubelet-test-3484 deletion completed in 6.178212352s

• [SLOW TEST:6.390 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:40:47.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a3ad0430-2900-4fd7-b4a2-e28271d439cd
STEP: Creating a pod to test consume secrets
Feb  3 13:40:47.905: INFO: Waiting up to 5m0s for pod "pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2" in namespace "secrets-9197" to be "success or failure"
Feb  3 13:40:47.913: INFO: Pod "pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.563293ms
Feb  3 13:40:49.920: INFO: Pod "pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014699054s
Feb  3 13:40:52.040: INFO: Pod "pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134662341s
Feb  3 13:40:54.051: INFO: Pod "pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145404231s
Feb  3 13:40:56.058: INFO: Pod "pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.152602648s
STEP: Saw pod success
Feb  3 13:40:56.058: INFO: Pod "pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2" satisfied condition "success or failure"
Feb  3 13:40:56.064: INFO: Trying to get logs from node iruya-node pod pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2 container secret-env-test: 
STEP: delete the pod
Feb  3 13:40:56.190: INFO: Waiting for pod pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2 to disappear
Feb  3 13:40:56.199: INFO: Pod pod-secrets-4e617d27-3a59-4226-934e-829372b32bc2 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:40:56.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9197" for this suite.
Feb  3 13:41:02.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:41:02.422: INFO: namespace secrets-9197 deletion completed in 6.215414943s

• [SLOW TEST:14.699 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
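The Secret created in the block above is consumed through environment variables (the `secretKeyRef` mechanism): each referenced Secret key is base64-decoded and injected into the container's environment under the requested name. A small sketch of that key-to-env mapping, assuming base64-encoded Secret data as stored in the API (the helper name is illustrative, not a client-go API):

```python
import base64

def env_from_secret(secret_data, mappings):
    """secret_data: {key: base64-encoded value}, as in a Secret's `data` field.
    mappings: {ENV_NAME: secret_key}, like a list of secretKeyRef entries.
    Returns the decoded environment the container would observe."""
    return {env: base64.b64decode(secret_data[key]).decode()
            for env, key in mappings.items()}
```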
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:41:02.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  3 13:41:02.505: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb  3 13:41:03.222: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb  3 13:41:05.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:41:07.475: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:41:09.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:41:11.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716334063, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:41:18.714: INFO: Waited 5.233854824s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:41:19.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1539" for this suite.
Feb  3 13:41:25.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:41:25.764: INFO: namespace aggregator-1539 deletion completed in 6.24261082s

• [SLOW TEST:23.339 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
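The repeated "deployment status: v1.DeploymentStatus{...}" lines above are a wait loop polling the sample-apiserver Deployment until it reaches minimum availability; each iteration logs the full status while `ReadyReplicas` is still 0 and the `Available` condition is `False`. A toy version of the predicate being polled (field names follow `v1.DeploymentStatus`; the function itself is an illustration, not the framework's code):

```python
def deployment_available(status, min_available=1):
    """True once the Deployment has enough ready/available replicas and
    nothing unavailable -- the condition the aggregator test waits for
    before logging 'Waited ... for the sample-apiserver to be ready'."""
    return (status["ReadyReplicas"] >= min_available
            and status["AvailableReplicas"] >= min_available
            and status["UnavailableReplicas"] == 0)
```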
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:41:25.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9863/configmap-test-65c313c8-c87a-48e2-ba58-192d9b026a5d
STEP: Creating a pod to test consume configMaps
Feb  3 13:41:25.980: INFO: Waiting up to 5m0s for pod "pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929" in namespace "configmap-9863" to be "success or failure"
Feb  3 13:41:26.039: INFO: Pod "pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929": Phase="Pending", Reason="", readiness=false. Elapsed: 58.387591ms
Feb  3 13:41:28.059: INFO: Pod "pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079090596s
Feb  3 13:41:30.073: INFO: Pod "pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092894489s
Feb  3 13:41:32.081: INFO: Pod "pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101317189s
Feb  3 13:41:34.130: INFO: Pod "pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149589141s
Feb  3 13:41:36.145: INFO: Pod "pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.165193392s
STEP: Saw pod success
Feb  3 13:41:36.146: INFO: Pod "pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929" satisfied condition "success or failure"
Feb  3 13:41:36.152: INFO: Trying to get logs from node iruya-node pod pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929 container env-test: 
STEP: delete the pod
Feb  3 13:41:36.252: INFO: Waiting for pod pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929 to disappear
Feb  3 13:41:36.264: INFO: Pod pod-configmaps-47d8d719-5003-4514-8f89-fa728c2f3929 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:41:36.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9863" for this suite.
Feb  3 13:41:42.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:41:42.587: INFO: namespace configmap-9863 deletion completed in 6.247082195s

• [SLOW TEST:16.822 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:41:42.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  3 13:41:59.836: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  3 13:41:59.878: INFO: Pod pod-with-poststart-http-hook still exists
Feb  3 13:42:01.879: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  3 13:42:01.887: INFO: Pod pod-with-poststart-http-hook still exists
Feb  3 13:42:03.879: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  3 13:42:03.893: INFO: Pod pod-with-poststart-http-hook still exists
Feb  3 13:42:05.879: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  3 13:42:05.896: INFO: Pod pod-with-poststart-http-hook still exists
Feb  3 13:42:07.879: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  3 13:42:07.891: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:42:07.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4448" for this suite.
Feb  3 13:42:31.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:42:32.121: INFO: namespace container-lifecycle-hook-4448 deletion completed in 24.222028512s

• [SLOW TEST:49.531 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
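The postStart test above relies on the lifecycle-hook contract: the hook fires immediately after the container starts (here, an HTTP GET against the handler pod created in BeforeEach), and a failing hook causes the container to be killed. A toy model of that ordering contract (not kubelet source; the return values are simplified stand-ins for container state):

```python
def run_with_poststart(start_container, post_start_hook):
    """Start the container, then run its postStart hook. If the hook
    raises, the container is terminated (and restarted per its policy);
    otherwise it keeps running. A deliberately simplified model."""
    start_container()
    try:
        post_start_hook()
        return "Running"
    except Exception:
        return "Terminated"
```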
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:42:32.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 13:42:32.268: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  3 13:42:32.298: INFO: Number of nodes with available pods: 0
Feb  3 13:42:32.298: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  3 13:42:32.368: INFO: Number of nodes with available pods: 0
Feb  3 13:42:32.368: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:33.377: INFO: Number of nodes with available pods: 0
Feb  3 13:42:33.378: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:34.378: INFO: Number of nodes with available pods: 0
Feb  3 13:42:34.378: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:35.375: INFO: Number of nodes with available pods: 0
Feb  3 13:42:35.375: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:36.379: INFO: Number of nodes with available pods: 0
Feb  3 13:42:36.379: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:37.378: INFO: Number of nodes with available pods: 0
Feb  3 13:42:37.378: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:38.381: INFO: Number of nodes with available pods: 0
Feb  3 13:42:38.381: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:39.380: INFO: Number of nodes with available pods: 0
Feb  3 13:42:39.380: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:40.380: INFO: Number of nodes with available pods: 1
Feb  3 13:42:40.380: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  3 13:42:40.473: INFO: Number of nodes with available pods: 1
Feb  3 13:42:40.473: INFO: Number of running nodes: 0, number of available pods: 1
Feb  3 13:42:41.482: INFO: Number of nodes with available pods: 0
Feb  3 13:42:41.482: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  3 13:42:41.523: INFO: Number of nodes with available pods: 0
Feb  3 13:42:41.523: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:42.540: INFO: Number of nodes with available pods: 0
Feb  3 13:42:42.540: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:43.534: INFO: Number of nodes with available pods: 0
Feb  3 13:42:43.534: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:44.542: INFO: Number of nodes with available pods: 0
Feb  3 13:42:44.542: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:45.529: INFO: Number of nodes with available pods: 0
Feb  3 13:42:45.529: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:46.534: INFO: Number of nodes with available pods: 0
Feb  3 13:42:46.534: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:47.533: INFO: Number of nodes with available pods: 0
Feb  3 13:42:47.533: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:48.540: INFO: Number of nodes with available pods: 0
Feb  3 13:42:48.540: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:49.532: INFO: Number of nodes with available pods: 0
Feb  3 13:42:49.533: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:50.539: INFO: Number of nodes with available pods: 0
Feb  3 13:42:50.539: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:51.531: INFO: Number of nodes with available pods: 0
Feb  3 13:42:51.532: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:52.543: INFO: Number of nodes with available pods: 0
Feb  3 13:42:52.543: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:53.532: INFO: Number of nodes with available pods: 0
Feb  3 13:42:53.532: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:54.537: INFO: Number of nodes with available pods: 0
Feb  3 13:42:54.537: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:55.533: INFO: Number of nodes with available pods: 0
Feb  3 13:42:55.533: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:56.576: INFO: Number of nodes with available pods: 0
Feb  3 13:42:56.576: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:57.532: INFO: Number of nodes with available pods: 0
Feb  3 13:42:57.532: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:58.540: INFO: Number of nodes with available pods: 0
Feb  3 13:42:58.540: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:42:59.539: INFO: Number of nodes with available pods: 0
Feb  3 13:42:59.539: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:43:00.540: INFO: Number of nodes with available pods: 0
Feb  3 13:43:00.540: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:43:01.532: INFO: Number of nodes with available pods: 0
Feb  3 13:43:01.532: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:43:02.543: INFO: Number of nodes with available pods: 0
Feb  3 13:43:02.543: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:43:03.533: INFO: Number of nodes with available pods: 0
Feb  3 13:43:03.533: INFO: Node iruya-node is running more than one daemon pod
Feb  3 13:43:04.543: INFO: Number of nodes with available pods: 1
Feb  3 13:43:04.543: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4043, will wait for the garbage collector to delete the pods
Feb  3 13:43:04.629: INFO: Deleting DaemonSet.extensions daemon-set took: 13.249639ms
Feb  3 13:43:04.929: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.464962ms
Feb  3 13:43:16.541: INFO: Number of nodes with available pods: 0
Feb  3 13:43:16.541: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 13:43:16.548: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4043/daemonsets","resourceVersion":"22944594"},"items":null}

Feb  3 13:43:16.593: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4043/pods","resourceVersion":"22944594"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:43:16.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4043" for this suite.
Feb  3 13:43:22.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:43:22.778: INFO: namespace daemonsets-4043 deletion completed in 6.135696373s

• [SLOW TEST:50.656 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
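The relabel-and-wait steps above exercise the DaemonSet scheduling predicate: a daemon pod runs only on nodes whose labels satisfy the DaemonSet's `nodeSelector`, so labeling a node blue launches the pod and relabeling it green unschedules it. A toy version of that predicate (illustrative, not scheduler or controller source):

```python
def daemonset_nodes(nodes, node_selector):
    """Return the names of nodes eligible to run a daemon pod.

    nodes: {node_name: {label_key: label_value}}.
    node_selector: the DaemonSet's nodeSelector; every key/value must
    match the node's labels, as in the test's blue/green relabeling."""
    return [name for name, labels in nodes.items()
            if all(labels.get(k) == v for k, v in node_selector.items())]
```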
SSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:43:22.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-2ca81730-5546-4203-9d51-cb3ede4b0075
STEP: Creating configMap with name cm-test-opt-upd-b2085879-065a-4428-964d-ca3ebdf478fb
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2ca81730-5546-4203-9d51-cb3ede4b0075
STEP: Updating configmap cm-test-opt-upd-b2085879-065a-4428-964d-ca3ebdf478fb
STEP: Creating configMap with name cm-test-opt-create-05c9c6de-b4f0-4484-94be-da9324acdfd5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:45:01.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7648" for this suite.
Feb  3 13:45:23.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:45:23.327: INFO: namespace projected-7648 deletion completed in 22.156475939s

• [SLOW TEST:120.549 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
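The optional-configMap steps above work because projected volume sources marked `optional: true` tolerate a missing ConfigMap, and the kubelet's periodic sync later reflects deletions, updates, and creations in the mounted files ("waiting to observe update in volume"). A toy model of assembling the volume's view from its sources (names and structure are illustrative; real projected volumes map keys to file paths and reject collisions):

```python
def projected_configmap_view(configmaps, sources):
    """configmaps: {name: {key: value}} currently existing in the namespace.
    sources: ordered list of configMap source names, all optional=True.
    A missing optional ConfigMap contributes nothing; present ones
    contribute their current data, so re-reads observe updates."""
    files = {}
    for name in sources:
        cm = configmaps.get(name)
        if cm is None:
            continue  # optional=True: an absent ConfigMap is tolerated
        files.update(cm)
    return files
```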
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:45:23.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  3 13:45:23.419: INFO: Waiting up to 5m0s for pod "pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34" in namespace "emptydir-9421" to be "success or failure"
Feb  3 13:45:23.435: INFO: Pod "pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34": Phase="Pending", Reason="", readiness=false. Elapsed: 15.198998ms
Feb  3 13:45:25.443: INFO: Pod "pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02378028s
Feb  3 13:45:27.450: INFO: Pod "pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030339842s
Feb  3 13:45:29.460: INFO: Pod "pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040325132s
Feb  3 13:45:31.467: INFO: Pod "pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047020493s
Feb  3 13:45:33.479: INFO: Pod "pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059335081s
STEP: Saw pod success
Feb  3 13:45:33.479: INFO: Pod "pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34" satisfied condition "success or failure"
Feb  3 13:45:33.483: INFO: Trying to get logs from node iruya-node pod pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34 container test-container: 
STEP: delete the pod
Feb  3 13:45:33.551: INFO: Waiting for pod pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34 to disappear
Feb  3 13:45:33.554: INFO: Pod pod-4ea5137f-3f98-4632-8f8f-8b667ded5f34 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:45:33.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9421" for this suite.
Feb  3 13:45:39.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:45:39.707: INFO: namespace emptydir-9421 deletion completed in 6.148069003s

• [SLOW TEST:16.378 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:45:39.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 13:45:39.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6" in namespace "projected-2447" to be "success or failure"
Feb  3 13:45:39.935: INFO: Pod "downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 44.28962ms
Feb  3 13:45:41.985: INFO: Pod "downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094038886s
Feb  3 13:45:44.004: INFO: Pod "downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113526403s
Feb  3 13:45:46.045: INFO: Pod "downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154355427s
Feb  3 13:45:48.052: INFO: Pod "downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6": Phase="Running", Reason="", readiness=true. Elapsed: 8.161321165s
Feb  3 13:45:50.059: INFO: Pod "downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168044247s
STEP: Saw pod success
Feb  3 13:45:50.059: INFO: Pod "downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6" satisfied condition "success or failure"
Feb  3 13:45:50.062: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6 container client-container: 
STEP: delete the pod
Feb  3 13:45:50.121: INFO: Waiting for pod downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6 to disappear
Feb  3 13:45:50.155: INFO: Pod downwardapi-volume-77d5adc9-0d3e-4d98-97dd-dcfaec245fd6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:45:50.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2447" for this suite.
Feb  3 13:45:56.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:45:56.344: INFO: namespace projected-2447 deletion completed in 6.184417824s

• [SLOW TEST:16.636 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:45:56.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-jn72
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 13:45:56.496: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jn72" in namespace "subpath-64" to be "success or failure"
Feb  3 13:45:56.526: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Pending", Reason="", readiness=false. Elapsed: 29.914202ms
Feb  3 13:45:58.537: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040624516s
Feb  3 13:46:00.570: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073458495s
Feb  3 13:46:02.586: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0893482s
Feb  3 13:46:04.591: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 8.094051925s
Feb  3 13:46:06.631: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 10.134692908s
Feb  3 13:46:08.639: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 12.142706695s
Feb  3 13:46:10.654: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 14.157231103s
Feb  3 13:46:12.663: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 16.16681493s
Feb  3 13:46:14.677: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 18.180332864s
Feb  3 13:46:16.707: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 20.210745445s
Feb  3 13:46:18.730: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 22.233628945s
Feb  3 13:46:20.747: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 24.250797786s
Feb  3 13:46:23.348: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 26.851767479s
Feb  3 13:46:25.356: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Running", Reason="", readiness=true. Elapsed: 28.859449422s
Feb  3 13:46:27.369: INFO: Pod "pod-subpath-test-secret-jn72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.872399498s
STEP: Saw pod success
Feb  3 13:46:27.369: INFO: Pod "pod-subpath-test-secret-jn72" satisfied condition "success or failure"
Feb  3 13:46:27.374: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-jn72 container test-container-subpath-secret-jn72: 
STEP: delete the pod
Feb  3 13:46:27.477: INFO: Waiting for pod pod-subpath-test-secret-jn72 to disappear
Feb  3 13:46:27.486: INFO: Pod pod-subpath-test-secret-jn72 no longer exists
STEP: Deleting pod pod-subpath-test-secret-jn72
Feb  3 13:46:27.486: INFO: Deleting pod "pod-subpath-test-secret-jn72" in namespace "subpath-64"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:46:27.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-64" for this suite.
Feb  3 13:46:33.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:46:33.713: INFO: namespace subpath-64 deletion completed in 6.198816485s

• [SLOW TEST:37.369 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:46:33.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-0f84639d-e999-4fda-9efe-adb13c52698f
STEP: Creating a pod to test consume secrets
Feb  3 13:46:33.889: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0" in namespace "projected-2250" to be "success or failure"
Feb  3 13:46:33.905: INFO: Pod "pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.971125ms
Feb  3 13:46:35.924: INFO: Pod "pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03435533s
Feb  3 13:46:37.939: INFO: Pod "pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049741744s
Feb  3 13:46:39.948: INFO: Pod "pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058245552s
Feb  3 13:46:41.955: INFO: Pod "pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065773137s
Feb  3 13:46:43.966: INFO: Pod "pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076358832s
STEP: Saw pod success
Feb  3 13:46:43.966: INFO: Pod "pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0" satisfied condition "success or failure"
Feb  3 13:46:43.971: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0 container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 13:46:44.037: INFO: Waiting for pod pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0 to disappear
Feb  3 13:46:44.071: INFO: Pod pod-projected-secrets-080d76d2-5bcf-4eb6-8dc8-34e5da6bccd0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:46:44.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2250" for this suite.
Feb  3 13:46:50.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:46:50.237: INFO: namespace projected-2250 deletion completed in 6.158161812s

• [SLOW TEST:16.522 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:46:50.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-h4nq
STEP: Creating a pod to test atomic-volume-subpath
Feb  3 13:46:50.396: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h4nq" in namespace "subpath-8743" to be "success or failure"
Feb  3 13:46:50.401: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391946ms
Feb  3 13:46:52.414: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017368416s
Feb  3 13:46:54.431: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034380382s
Feb  3 13:46:56.441: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044748114s
Feb  3 13:46:58.452: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 8.055538192s
Feb  3 13:47:00.465: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 10.068554666s
Feb  3 13:47:02.483: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 12.086553988s
Feb  3 13:47:04.506: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 14.109228553s
Feb  3 13:47:06.525: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 16.127935958s
Feb  3 13:47:08.540: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 18.143166952s
Feb  3 13:47:10.559: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 20.162762276s
Feb  3 13:47:12.582: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 22.185699121s
Feb  3 13:47:14.594: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 24.197449901s
Feb  3 13:47:16.620: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 26.223727422s
Feb  3 13:47:18.643: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Running", Reason="", readiness=true. Elapsed: 28.246704822s
Feb  3 13:47:20.663: INFO: Pod "pod-subpath-test-configmap-h4nq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.266045981s
STEP: Saw pod success
Feb  3 13:47:20.663: INFO: Pod "pod-subpath-test-configmap-h4nq" satisfied condition "success or failure"
Feb  3 13:47:20.668: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-h4nq container test-container-subpath-configmap-h4nq: 
STEP: delete the pod
Feb  3 13:47:20.737: INFO: Waiting for pod pod-subpath-test-configmap-h4nq to disappear
Feb  3 13:47:20.745: INFO: Pod pod-subpath-test-configmap-h4nq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-h4nq
Feb  3 13:47:20.745: INFO: Deleting pod "pod-subpath-test-configmap-h4nq" in namespace "subpath-8743"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:47:20.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8743" for this suite.
Feb  3 13:47:26.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:47:26.911: INFO: namespace subpath-8743 deletion completed in 6.155114941s

• [SLOW TEST:36.669 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:47:26.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0203 13:47:37.195973       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 13:47:37.196: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:47:37.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2753" for this suite.
Feb  3 13:47:43.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:47:43.404: INFO: namespace gc-2753 deletion completed in 6.202023248s

• [SLOW TEST:16.493 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:47:43.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 13:47:43.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:47:54.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9921" for this suite.
Feb  3 13:48:38.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:48:38.278: INFO: namespace pods-9921 deletion completed in 44.210088116s

• [SLOW TEST:54.873 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:48:38.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb  3 13:48:38.452: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7830" to be "success or failure"
Feb  3 13:48:38.488: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 35.52359ms
Feb  3 13:48:40.758: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305524944s
Feb  3 13:48:42.765: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312512013s
Feb  3 13:48:44.775: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.322769841s
Feb  3 13:48:46.782: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329320728s
Feb  3 13:48:48.791: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.338808161s
Feb  3 13:48:50.797: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.344910149s
STEP: Saw pod success
Feb  3 13:48:50.797: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  3 13:48:50.799: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  3 13:48:50.897: INFO: Waiting for pod pod-host-path-test to disappear
Feb  3 13:48:50.907: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:48:50.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7830" for this suite.
Feb  3 13:48:56.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:48:57.063: INFO: namespace hostpath-7830 deletion completed in 6.148391296s

• [SLOW TEST:18.785 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:48:57.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 13:48:57.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938" in namespace "downward-api-9015" to be "success or failure"
Feb  3 13:48:57.261: INFO: Pod "downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938": Phase="Pending", Reason="", readiness=false. Elapsed: 7.621152ms
Feb  3 13:48:59.271: INFO: Pod "downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017929854s
Feb  3 13:49:01.285: INFO: Pod "downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031223145s
Feb  3 13:49:03.291: INFO: Pod "downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038009488s
Feb  3 13:49:05.305: INFO: Pod "downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051460198s
STEP: Saw pod success
Feb  3 13:49:05.305: INFO: Pod "downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938" satisfied condition "success or failure"
Feb  3 13:49:05.312: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938 container client-container: 
STEP: delete the pod
Feb  3 13:49:05.351: INFO: Waiting for pod downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938 to disappear
Feb  3 13:49:05.406: INFO: Pod downwardapi-volume-9062ee71-519d-42c5-863c-daeaea31d938 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:49:05.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9015" for this suite.
Feb  3 13:49:11.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:49:11.613: INFO: namespace downward-api-9015 deletion completed in 6.193629056s

• [SLOW TEST:14.550 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:49:11.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  3 13:49:20.544: INFO: Successfully updated pod "annotationupdate5e658741-56af-4d79-b02f-a4ab1060913b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:49:22.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5831" for this suite.
Feb  3 13:49:44.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:49:44.861: INFO: namespace downward-api-5831 deletion completed in 22.163620149s

• [SLOW TEST:33.248 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:49:44.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb  3 13:49:44.971: INFO: Waiting up to 5m0s for pod "var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b" in namespace "var-expansion-5002" to be "success or failure"
Feb  3 13:49:44.977: INFO: Pod "var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.65744ms
Feb  3 13:49:46.988: INFO: Pod "var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016962288s
Feb  3 13:49:49.002: INFO: Pod "var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031330214s
Feb  3 13:49:51.011: INFO: Pod "var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040444078s
Feb  3 13:49:53.017: INFO: Pod "var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046543923s
STEP: Saw pod success
Feb  3 13:49:53.018: INFO: Pod "var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b" satisfied condition "success or failure"
Feb  3 13:49:53.020: INFO: Trying to get logs from node iruya-node pod var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b container dapi-container: 
STEP: delete the pod
Feb  3 13:49:53.103: INFO: Waiting for pod var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b to disappear
Feb  3 13:49:53.114: INFO: Pod var-expansion-51233fab-5f25-4d7c-86c4-7aff4064ed5b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:49:53.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5002" for this suite.
Feb  3 13:49:59.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:49:59.263: INFO: namespace var-expansion-5002 deletion completed in 6.142655758s

• [SLOW TEST:14.401 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:49:59.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb  3 13:49:59.363: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:50:16.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3500" for this suite.
Feb  3 13:50:22.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:50:22.690: INFO: namespace pods-3500 deletion completed in 6.118478732s

• [SLOW TEST:23.427 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:50:22.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0203 13:50:26.025749       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 13:50:26.025: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:50:26.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2523" for this suite.
Feb  3 13:50:32.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:50:32.181: INFO: namespace gc-2523 deletion completed in 6.15126889s

• [SLOW TEST:9.491 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:50:32.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 13:50:32.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6531'
Feb  3 13:50:34.388: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 13:50:34.389: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  3 13:50:34.487: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-rp6pp]
Feb  3 13:50:34.487: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-rp6pp" in namespace "kubectl-6531" to be "running and ready"
Feb  3 13:50:34.492: INFO: Pod "e2e-test-nginx-rc-rp6pp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.5158ms
Feb  3 13:50:36.514: INFO: Pod "e2e-test-nginx-rc-rp6pp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027203242s
Feb  3 13:50:38.532: INFO: Pod "e2e-test-nginx-rc-rp6pp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04449248s
Feb  3 13:50:40.544: INFO: Pod "e2e-test-nginx-rc-rp6pp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05692826s
Feb  3 13:50:42.563: INFO: Pod "e2e-test-nginx-rc-rp6pp": Phase="Running", Reason="", readiness=true. Elapsed: 8.075847344s
Feb  3 13:50:42.563: INFO: Pod "e2e-test-nginx-rc-rp6pp" satisfied condition "running and ready"
Feb  3 13:50:42.563: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-rp6pp]
Feb  3 13:50:42.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6531'
Feb  3 13:50:42.865: INFO: stderr: ""
Feb  3 13:50:42.866: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb  3 13:50:42.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6531'
Feb  3 13:50:43.103: INFO: stderr: ""
Feb  3 13:50:43.103: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:50:43.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6531" for this suite.
Feb  3 13:51:05.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:51:05.300: INFO: namespace kubectl-6531 deletion completed in 22.18687403s

• [SLOW TEST:33.117 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:51:05.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0203 13:51:46.183081       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 13:51:46.183: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:51:46.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4311" for this suite.
Feb  3 13:51:54.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:51:55.421: INFO: namespace gc-4311 deletion completed in 9.233228974s

• [SLOW TEST:50.121 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:51:55.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  3 13:51:55.822: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  3 13:52:00.838: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:52:02.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6458" for this suite.
Feb  3 13:52:10.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:52:10.291: INFO: namespace replication-controller-6458 deletion completed in 6.740185084s

• [SLOW TEST:14.869 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:52:10.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-c0b44011-7e46-4a26-9c18-e9af08153f1f
STEP: Creating secret with name s-test-opt-upd-8e416b3e-94d4-48e5-b912-587174b149b1
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c0b44011-7e46-4a26-9c18-e9af08153f1f
STEP: Updating secret s-test-opt-upd-8e416b3e-94d4-48e5-b912-587174b149b1
STEP: Creating secret with name s-test-opt-create-5efee6ce-6dee-4d53-975c-0cb2fc864cd0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:52:31.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2455" for this suite.
Feb  3 13:52:53.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:52:53.289: INFO: namespace projected-2455 deletion completed in 22.189769538s

• [SLOW TEST:42.998 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:52:53.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4755.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4755.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 13:53:05.407: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83: the server could not find the requested resource (get pods dns-test-3fb07134-884b-4037-9745-60b2a0993e83)
Feb  3 13:53:05.422: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83: the server could not find the requested resource (get pods dns-test-3fb07134-884b-4037-9745-60b2a0993e83)
Feb  3 13:53:05.430: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83: the server could not find the requested resource (get pods dns-test-3fb07134-884b-4037-9745-60b2a0993e83)
Feb  3 13:53:05.437: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83: the server could not find the requested resource (get pods dns-test-3fb07134-884b-4037-9745-60b2a0993e83)
Feb  3 13:53:05.442: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83: the server could not find the requested resource (get pods dns-test-3fb07134-884b-4037-9745-60b2a0993e83)
Feb  3 13:53:05.448: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83: the server could not find the requested resource (get pods dns-test-3fb07134-884b-4037-9745-60b2a0993e83)
Feb  3 13:53:05.453: INFO: Unable to read jessie_udp@PodARecord from pod dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83: the server could not find the requested resource (get pods dns-test-3fb07134-884b-4037-9745-60b2a0993e83)
Feb  3 13:53:05.457: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83: the server could not find the requested resource (get pods dns-test-3fb07134-884b-4037-9745-60b2a0993e83)
Feb  3 13:53:05.457: INFO: Lookups using dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  3 13:53:10.528: INFO: DNS probes using dns-4755/dns-test-3fb07134-884b-4037-9745-60b2a0993e83 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:53:10.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4755" for this suite.
Feb  3 13:53:16.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:53:16.882: INFO: namespace dns-4755 deletion completed in 6.245433124s

• [SLOW TEST:23.593 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:53:16.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-5704
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5704
STEP: Deleting pre-stop pod
Feb  3 13:53:38.604: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:53:38.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5704" for this suite.
Feb  3 13:54:18.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:54:18.755: INFO: namespace prestop-5704 deletion completed in 40.128122324s

• [SLOW TEST:61.872 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:54:18.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8983
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8983
STEP: Creating statefulset with conflicting port in namespace statefulset-8983
STEP: Waiting until pod test-pod will start running in namespace statefulset-8983
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8983
Feb  3 13:54:28.954: INFO: Observed stateful pod in namespace: statefulset-8983, name: ss-0, uid: eef0f559-3845-4e6c-a29a-d7fe58615a5d, status phase: Pending. Waiting for statefulset controller to delete.
Feb  3 13:54:29.481: INFO: Observed stateful pod in namespace: statefulset-8983, name: ss-0, uid: eef0f559-3845-4e6c-a29a-d7fe58615a5d, status phase: Failed. Waiting for statefulset controller to delete.
Feb  3 13:54:29.508: INFO: Observed stateful pod in namespace: statefulset-8983, name: ss-0, uid: eef0f559-3845-4e6c-a29a-d7fe58615a5d, status phase: Failed. Waiting for statefulset controller to delete.
Feb  3 13:54:29.543: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8983
STEP: Removing pod with conflicting port in namespace statefulset-8983
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8983 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  3 13:54:39.656: INFO: Deleting all statefulset in ns statefulset-8983
Feb  3 13:54:39.662: INFO: Scaling statefulset ss to 0
Feb  3 13:54:49.704: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 13:54:49.711: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:54:49.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8983" for this suite.
Feb  3 13:54:55.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:54:55.990: INFO: namespace statefulset-8983 deletion completed in 6.236989157s

• [SLOW TEST:37.235 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:54:55.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9a732b54-912f-481e-aee1-809e339940e6
STEP: Creating a pod to test consume secrets
Feb  3 13:54:56.108: INFO: Waiting up to 5m0s for pod "pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580" in namespace "secrets-9387" to be "success or failure"
Feb  3 13:54:56.118: INFO: Pod "pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580": Phase="Pending", Reason="", readiness=false. Elapsed: 9.711488ms
Feb  3 13:54:58.128: INFO: Pod "pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019320487s
Feb  3 13:55:00.135: INFO: Pod "pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027241614s
Feb  3 13:55:02.151: INFO: Pod "pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042544487s
Feb  3 13:55:04.158: INFO: Pod "pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049978219s
STEP: Saw pod success
Feb  3 13:55:04.158: INFO: Pod "pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580" satisfied condition "success or failure"
Feb  3 13:55:04.163: INFO: Trying to get logs from node iruya-node pod pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580 container secret-volume-test: 
STEP: delete the pod
Feb  3 13:55:04.388: INFO: Waiting for pod pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580 to disappear
Feb  3 13:55:04.395: INFO: Pod pod-secrets-d0afcd72-3ebd-488c-b722-9c6f9c596580 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:55:04.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9387" for this suite.
Feb  3 13:55:10.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:55:10.599: INFO: namespace secrets-9387 deletion completed in 6.153509023s

• [SLOW TEST:14.608 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
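The repeated `Phase="Pending" ... Elapsed:` lines above come from a poll-until-terminal-phase loop (roughly a 2 s interval against a 5 m timeout). A minimal Python sketch of that pattern, with `get_phase` standing in for the real API client call (hypothetical name, not the e2e framework's actual function):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       interval=2.0, timeout=300.0):
    """Poll get_phase() until it returns a terminal phase or the timeout
    expires, mirroring the 'Waiting up to 5m0s ... Elapsed:' log lines."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase="{phase}". Elapsed: {elapsed:.2f}s')
        if phase in want:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)
```

This is only an approximation of the framework's `WaitForPodSuccessInNamespace` behavior, not its implementation.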
SSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:55:10.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-961.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-961.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-961.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-961.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  3 13:55:23.466: INFO: Unable to read wheezy_udp@PodARecord from pod dns-961/dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961: the server could not find the requested resource (get pods dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961)
Feb  3 13:55:23.509: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-961/dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961: the server could not find the requested resource (get pods dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961)
Feb  3 13:55:23.523: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-961.svc.cluster.local from pod dns-961/dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961: the server could not find the requested resource (get pods dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961)
Feb  3 13:55:23.530: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-961/dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961: the server could not find the requested resource (get pods dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961)
Feb  3 13:55:23.534: INFO: Unable to read jessie_udp@PodARecord from pod dns-961/dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961: the server could not find the requested resource (get pods dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961)
Feb  3 13:55:23.539: INFO: Unable to read jessie_tcp@PodARecord from pod dns-961/dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961: the server could not find the requested resource (get pods dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961)
Feb  3 13:55:23.539: INFO: Lookups using dns-961/dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-961.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  3 13:55:28.600: INFO: DNS probes using dns-961/dns-test-7c5f77ba-6d48-4326-ba28-50a08c9de961 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:55:28.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-961" for this suite.
Feb  3 13:55:34.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:55:34.808: INFO: namespace dns-961 deletion completed in 6.160104649s

• [SLOW TEST:24.209 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
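The `awk -F. '{print $1"-"$2"-"$3"-"$4".dns-961.pod.cluster.local"}'` pipeline in the probe commands above derives a pod's DNS A-record name from its IP: dots become dashes, then the namespace and `pod.cluster.local` suffix are appended. A standalone sketch of that transform (function name is illustrative; `dns-961` is the namespace from this run):

```python
def pod_a_record(ip: str, namespace: str, domain: str = "cluster.local") -> str:
    """Build the pod A-record name the probe's awk pipeline constructs:
    10.44.0.5 in namespace dns-961 -> 10-44-0-5.dns-961.pod.cluster.local."""
    return f"{ip.replace('.', '-')}.{namespace}.pod.{domain}"

print(pod_a_record("10.44.0.5", "dns-961"))
# 10-44-0-5.dns-961.pod.cluster.local
```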
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:55:34.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-89edc413-af03-4a6f-8c17-9f631d3232cb
STEP: Creating a pod to test consume secrets
Feb  3 13:55:34.945: INFO: Waiting up to 5m0s for pod "pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce" in namespace "secrets-4907" to be "success or failure"
Feb  3 13:55:34.959: INFO: Pod "pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 13.420491ms
Feb  3 13:55:36.966: INFO: Pod "pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021026856s
Feb  3 13:55:38.979: INFO: Pod "pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033305349s
Feb  3 13:55:41.009: INFO: Pod "pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063732479s
Feb  3 13:55:43.023: INFO: Pod "pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077540095s
STEP: Saw pod success
Feb  3 13:55:43.023: INFO: Pod "pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce" satisfied condition "success or failure"
Feb  3 13:55:43.028: INFO: Trying to get logs from node iruya-node pod pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce container secret-volume-test: 
STEP: delete the pod
Feb  3 13:55:43.119: INFO: Waiting for pod pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce to disappear
Feb  3 13:55:43.126: INFO: Pod pod-secrets-893ed260-b248-4317-88a1-72307b9fb9ce no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:55:43.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4907" for this suite.
Feb  3 13:55:49.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:55:49.368: INFO: namespace secrets-4907 deletion completed in 6.20166339s

• [SLOW TEST:14.559 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:55:49.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  3 13:55:49.452: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:56:06.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-762" for this suite.
Feb  3 13:56:28.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:56:28.267: INFO: namespace init-container-762 deletion completed in 22.179421902s

• [SLOW TEST:38.898 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:56:28.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:56:36.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5226" for this suite.
Feb  3 13:57:28.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:57:28.639: INFO: namespace kubelet-test-5226 deletion completed in 52.183344387s

• [SLOW TEST:60.372 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:57:28.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-1b9176a3-1383-4bdd-b45d-3749ea537dfe
STEP: Creating a pod to test consume secrets
Feb  3 13:57:28.754: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7" in namespace "projected-7814" to be "success or failure"
Feb  3 13:57:28.762: INFO: Pod "pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.449323ms
Feb  3 13:57:30.770: INFO: Pod "pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015816847s
Feb  3 13:57:32.781: INFO: Pod "pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027090297s
Feb  3 13:57:34.795: INFO: Pod "pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041084433s
Feb  3 13:57:36.803: INFO: Pod "pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049037678s
STEP: Saw pod success
Feb  3 13:57:36.803: INFO: Pod "pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7" satisfied condition "success or failure"
Feb  3 13:57:36.807: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7 container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 13:57:36.937: INFO: Waiting for pod pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7 to disappear
Feb  3 13:57:36.946: INFO: Pod pod-projected-secrets-ce091618-13d9-4345-947d-5f8473658bb7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:57:36.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7814" for this suite.
Feb  3 13:57:42.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:57:43.075: INFO: namespace projected-7814 deletion completed in 6.121373529s

• [SLOW TEST:14.436 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:57:43.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb  3 13:57:51.184: INFO: Pod pod-hostip-21e0fd3d-0eb8-4fcf-8306-a97c22d21912 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:57:51.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1052" for this suite.
Feb  3 13:58:13.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:58:13.325: INFO: namespace pods-1052 deletion completed in 22.135313869s

• [SLOW TEST:30.249 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:58:13.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 13:58:13.471: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194" in namespace "downward-api-7554" to be "success or failure"
Feb  3 13:58:13.503: INFO: Pod "downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194": Phase="Pending", Reason="", readiness=false. Elapsed: 32.349268ms
Feb  3 13:58:15.531: INFO: Pod "downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059922368s
Feb  3 13:58:17.545: INFO: Pod "downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07398836s
Feb  3 13:58:19.560: INFO: Pod "downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089353451s
Feb  3 13:58:21.571: INFO: Pod "downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100094805s
Feb  3 13:58:23.579: INFO: Pod "downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108353326s
STEP: Saw pod success
Feb  3 13:58:23.579: INFO: Pod "downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194" satisfied condition "success or failure"
Feb  3 13:58:23.584: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194 container client-container: 
STEP: delete the pod
Feb  3 13:58:23.690: INFO: Waiting for pod downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194 to disappear
Feb  3 13:58:23.697: INFO: Pod downwardapi-volume-d13b0151-738c-4cae-a512-21451fdcd194 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:58:23.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7554" for this suite.
Feb  3 13:58:29.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:58:29.900: INFO: namespace downward-api-7554 deletion completed in 6.196831359s

• [SLOW TEST:16.574 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:58:29.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 13:58:29.943: INFO: Creating deployment "test-recreate-deployment"
Feb  3 13:58:29.952: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb  3 13:58:29.984: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb  3 13:58:32.036: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb  3 13:58:32.042: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335110, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335110, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335110, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335109, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:58:34.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335110, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335110, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335110, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335109, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 13:58:36.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335110, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335110, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335110, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716335109, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
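The three status dumps above are re-polled until the deployment completes. From the DeploymentStatus fields visible in those dumps, completeness can be sketched as "all replicas updated and available, none unavailable"; a hedged approximation (not the e2e framework's actual check) in Python:

```python
def deployment_complete(spec_replicas: int, status: dict) -> bool:
    """Approximate the 'Waiting for deployment ... to complete' condition
    using the fields shown in the status dumps above."""
    return (status["UpdatedReplicas"] == spec_replicas
            and status["AvailableReplicas"] == spec_replicas
            and status["UnavailableReplicas"] == 0)

# The dumped status (UpdatedReplicas:1, AvailableReplicas:0,
# UnavailableReplicas:1) is not yet complete:
pending = dict(UpdatedReplicas=1, AvailableReplicas=0, UnavailableReplicas=1)
# deployment_complete(1, pending) -> False
```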
Feb  3 13:58:38.049: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  3 13:58:38.062: INFO: Updating deployment test-recreate-deployment
Feb  3 13:58:38.063: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  3 13:58:38.366: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-549,SelfLink:/apis/apps/v1/namespaces/deployment-549/deployments/test-recreate-deployment,UID:563b28a3-56c5-48a5-abe4-4847135ac38e,ResourceVersion:22947027,Generation:2,CreationTimestamp:2020-02-03 13:58:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-03 13:58:38 +0000 UTC 2020-02-03 13:58:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-03 13:58:38 +0000 UTC 2020-02-03 13:58:29 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  3 13:58:38.380: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-549,SelfLink:/apis/apps/v1/namespaces/deployment-549/replicasets/test-recreate-deployment-5c8c9cc69d,UID:2e6e5d04-0270-4f97-83f7-0e7dadf4aa99,ResourceVersion:22947026,Generation:1,CreationTimestamp:2020-02-03 13:58:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 563b28a3-56c5-48a5-abe4-4847135ac38e 0xc001e72fe7 0xc001e72fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  3 13:58:38.380: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  3 13:58:38.381: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-549,SelfLink:/apis/apps/v1/namespaces/deployment-549/replicasets/test-recreate-deployment-6df85df6b9,UID:9cd9f10b-0197-42c4-9c62-e174cc0c85e3,ResourceVersion:22947016,Generation:2,CreationTimestamp:2020-02-03 13:58:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 563b28a3-56c5-48a5-abe4-4847135ac38e 0xc001e730b7 0xc001e730b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  3 13:58:38.386: INFO: Pod "test-recreate-deployment-5c8c9cc69d-wc5hh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-wc5hh,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-549,SelfLink:/api/v1/namespaces/deployment-549/pods/test-recreate-deployment-5c8c9cc69d-wc5hh,UID:29f33cf9-ee1c-4b47-974f-ab3bf0cc2095,ResourceVersion:22947028,Generation:0,CreationTimestamp:2020-02-03 13:58:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 2e6e5d04-0270-4f97-83f7-0e7dadf4aa99 0xc00134f097 0xc00134f098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-swjnv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-swjnv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-swjnv true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00134f110} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00134f130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:58:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:58:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:58:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 13:58:38 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-03 13:58:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:58:38.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-549" for this suite.
Feb  3 13:58:44.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:58:44.516: INFO: namespace deployment-549 deletion completed in 6.126268203s

• [SLOW TEST:14.615 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
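The deployment dump above illustrates the `Recreate` strategy mid-rollout: the old ReplicaSet (revision 1, the redis template) is already scaled to `Replicas:*0` before the new ReplicaSet (revision 2, the nginx template) brings up its pod, which is why `UnavailableReplicas:1` is expected at this point. A minimal standalone sketch of the invariant Recreate maintains — old and new pods never run at the same time — with hypothetical names, not the actual controller code:

```go
package main

import "fmt"

// rolloutState captures the replica counts of the old and new
// ReplicaSets at one observed point during a rollout.
type rolloutState struct {
	oldReplicas, newReplicas int
}

// validRecreate reports whether a sequence of observed states respects
// the Recreate strategy: the new ReplicaSet only scales up after the
// old one has reached zero, and the old one never scales back up.
func validRecreate(states []rolloutState) bool {
	oldDone := false
	for _, s := range states {
		if s.oldReplicas > 0 && s.newReplicas > 0 {
			return false // old and new pods overlapping
		}
		if s.oldReplicas == 0 {
			oldDone = true
		} else if oldDone {
			return false // old ReplicaSet came back after draining
		}
	}
	return true
}

func main() {
	// Matches the log: old RS at 1, old drained to 0, then new RS at 1.
	fmt.Println(validRecreate([]rolloutState{{1, 0}, {0, 0}, {0, 1}}))
}
```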
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:58:44.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 13:58:44.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261" in namespace "projected-7412" to be "success or failure"
Feb  3 13:58:44.739: INFO: Pod "downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261": Phase="Pending", Reason="", readiness=false. Elapsed: 13.10346ms
Feb  3 13:58:46.748: INFO: Pod "downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021762563s
Feb  3 13:58:48.759: INFO: Pod "downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032395419s
Feb  3 13:58:50.770: INFO: Pod "downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043285406s
Feb  3 13:58:52.784: INFO: Pod "downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057518286s
Feb  3 13:58:54.791: INFO: Pod "downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064729827s
STEP: Saw pod success
Feb  3 13:58:54.791: INFO: Pod "downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261" satisfied condition "success or failure"
Feb  3 13:58:54.795: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261 container client-container: 
STEP: delete the pod
Feb  3 13:58:54.853: INFO: Waiting for pod downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261 to disappear
Feb  3 13:58:54.859: INFO: Pod downwardapi-volume-f4fccfa4-97a5-4bcf-8665-bd94c9581261 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:58:54.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7412" for this suite.
Feb  3 13:59:00.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:59:00.964: INFO: namespace projected-7412 deletion completed in 6.093857564s

• [SLOW TEST:16.447 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:59:00.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1
Feb  3 13:59:01.033: INFO: Pod name my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1: Found 0 pods out of 1
Feb  3 13:59:06.043: INFO: Pod name my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1: Found 1 pods out of 1
Feb  3 13:59:06.043: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1" are running
Feb  3 13:59:10.058: INFO: Pod "my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1-8w7k8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 13:59:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 13:59:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 13:59:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 13:59:01 +0000 UTC Reason: Message:}])
Feb  3 13:59:10.058: INFO: Trying to dial the pod
Feb  3 13:59:15.082: INFO: Controller my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1: Got expected result from replica 1 [my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1-8w7k8]: "my-hostname-basic-82d5cf9a-ff54-4fe0-a1b6-eef4bc02b5d1-8w7k8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 13:59:15.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6008" for this suite.
Feb  3 13:59:21.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 13:59:21.255: INFO: namespace replication-controller-6008 deletion completed in 6.168569825s

• [SLOW TEST:20.291 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
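The `Got expected result from replica 1 [...]` line above reflects the spec's success criterion: each replica, when dialed, must answer with its own pod name. A standalone sketch of that check (hypothetical helper, not the test's code), with responses modeled as a pod-name-to-reply map:

```go
package main

import "fmt"

// checkReplicas verifies that every replica answered with its own pod
// name, mirroring the "Got expected result from replica N" log lines.
// responses maps pod name -> body returned when that pod was dialed.
func checkReplicas(responses map[string]string) (successes int, ok bool) {
	ok = true
	for pod, body := range responses {
		if body == pod {
			successes++
		} else {
			ok = false
		}
	}
	return successes, ok
}

func main() {
	n, ok := checkReplicas(map[string]string{
		"my-hostname-basic-8w7k8": "my-hostname-basic-8w7k8",
	})
	fmt.Println(n, ok)
}
```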
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 13:59:21.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  3 13:59:41.465: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:41.527: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 13:59:43.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:43.552: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 13:59:45.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:45.538: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 13:59:47.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:47.538: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 13:59:49.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:49.539: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 13:59:51.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:51.548: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 13:59:53.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:53.767: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 13:59:55.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:55.539: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 13:59:57.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:57.554: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 13:59:59.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 13:59:59.536: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 14:00:01.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 14:00:01.536: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 14:00:03.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 14:00:03.535: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 14:00:05.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 14:00:05.536: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  3 14:00:07.528: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  3 14:00:07.537: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:00:07.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8840" for this suite.
Feb  3 14:00:29.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:00:29.821: INFO: namespace container-lifecycle-hook-8840 deletion completed in 22.245926209s

• [SLOW TEST:68.566 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:00:29.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  3 14:00:30.004: INFO: Waiting up to 5m0s for pod "downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9" in namespace "downward-api-5681" to be "success or failure"
Feb  3 14:00:30.024: INFO: Pod "downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.105732ms
Feb  3 14:00:32.034: INFO: Pod "downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029153314s
Feb  3 14:00:34.047: INFO: Pod "downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042488168s
Feb  3 14:00:36.057: INFO: Pod "downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052553024s
Feb  3 14:00:38.068: INFO: Pod "downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063167166s
STEP: Saw pod success
Feb  3 14:00:38.068: INFO: Pod "downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9" satisfied condition "success or failure"
Feb  3 14:00:38.071: INFO: Trying to get logs from node iruya-node pod downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9 container dapi-container: 
STEP: delete the pod
Feb  3 14:00:38.152: INFO: Waiting for pod downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9 to disappear
Feb  3 14:00:38.185: INFO: Pod downward-api-51d74403-12d6-4f3c-b408-510f53c8a7f9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:00:38.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5681" for this suite.
Feb  3 14:00:44.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:00:44.400: INFO: namespace downward-api-5681 deletion completed in 6.167768832s

• [SLOW TEST:14.577 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
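The spec above injects the pod's name, namespace, and IP into the container's environment via downward-API `fieldRef`s (`metadata.name`, `metadata.namespace`, `status.podIP`). A sketch of the resulting environment as a plain map — the variable names here are illustrative, not necessarily the ones the e2e test uses:

```go
package main

import "fmt"

// downwardEnv builds the environment the downward API would inject for
// fieldRefs metadata.name, metadata.namespace and status.podIP.
// The env var names are assumptions for illustration.
func downwardEnv(name, namespace, podIP string) map[string]string {
	return map[string]string{
		"POD_NAME":      name,
		"POD_NAMESPACE": namespace,
		"POD_IP":        podIP,
	}
}

func main() {
	env := downwardEnv("downward-api-51d74403", "downward-api-5681", "10.44.0.1")
	fmt.Println(env["POD_NAMESPACE"])
}
```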
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:00:44.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  3 14:00:44.568: INFO: Number of nodes with available pods: 0
Feb  3 14:00:44.568: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:00:45.587: INFO: Number of nodes with available pods: 0
Feb  3 14:00:45.587: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:00:47.063: INFO: Number of nodes with available pods: 0
Feb  3 14:00:47.063: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:00:47.601: INFO: Number of nodes with available pods: 0
Feb  3 14:00:47.601: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:00:48.613: INFO: Number of nodes with available pods: 0
Feb  3 14:00:48.614: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:00:50.000: INFO: Number of nodes with available pods: 0
Feb  3 14:00:50.000: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:00:50.589: INFO: Number of nodes with available pods: 0
Feb  3 14:00:50.589: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:00:51.660: INFO: Number of nodes with available pods: 0
Feb  3 14:00:51.660: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:00:52.579: INFO: Number of nodes with available pods: 0
Feb  3 14:00:52.579: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:00:53.585: INFO: Number of nodes with available pods: 2
Feb  3 14:00:53.586: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  3 14:00:53.630: INFO: Number of nodes with available pods: 1
Feb  3 14:00:53.630: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:00:54.662: INFO: Number of nodes with available pods: 1
Feb  3 14:00:54.662: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:00:56.607: INFO: Number of nodes with available pods: 1
Feb  3 14:00:56.608: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:00:56.678: INFO: Number of nodes with available pods: 1
Feb  3 14:00:56.678: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:00:57.650: INFO: Number of nodes with available pods: 1
Feb  3 14:00:57.650: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:00:58.693: INFO: Number of nodes with available pods: 1
Feb  3 14:00:58.693: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:00:59.648: INFO: Number of nodes with available pods: 1
Feb  3 14:00:59.648: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:00.645: INFO: Number of nodes with available pods: 1
Feb  3 14:01:00.645: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:01.903: INFO: Number of nodes with available pods: 1
Feb  3 14:01:01.903: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:03.111: INFO: Number of nodes with available pods: 1
Feb  3 14:01:03.112: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:03.649: INFO: Number of nodes with available pods: 1
Feb  3 14:01:03.649: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:04.746: INFO: Number of nodes with available pods: 1
Feb  3 14:01:04.746: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:06.122: INFO: Number of nodes with available pods: 1
Feb  3 14:01:06.122: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:07.019: INFO: Number of nodes with available pods: 1
Feb  3 14:01:07.019: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:07.648: INFO: Number of nodes with available pods: 1
Feb  3 14:01:07.648: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:08.646: INFO: Number of nodes with available pods: 1
Feb  3 14:01:08.646: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:01:09.648: INFO: Number of nodes with available pods: 2
Feb  3 14:01:09.648: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5145, will wait for the garbage collector to delete the pods
Feb  3 14:01:09.725: INFO: Deleting DaemonSet.extensions daemon-set took: 17.261929ms
Feb  3 14:01:10.025: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.400069ms
Feb  3 14:01:27.993: INFO: Number of nodes with available pods: 0
Feb  3 14:01:27.993: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 14:01:27.999: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5145/daemonsets","resourceVersion":"22947448"},"items":null}

Feb  3 14:01:28.007: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5145/pods","resourceVersion":"22947448"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:01:28.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5145" for this suite.
Feb  3 14:01:34.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:01:34.124: INFO: namespace daemonsets-5145 deletion completed in 6.093117139s

• [SLOW TEST:49.724 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
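The `Number of nodes with available pods` / `Number of running nodes: 2, number of available pods: 2` lines above encode the DaemonSet readiness criterion the test polls for: every schedulable node runs exactly one available daemon pod, both at startup and again after one pod is killed and revived. A standalone sketch of those two checks (hypothetical helpers, not the e2e framework's):

```go
package main

import "fmt"

// availableByNode maps node name -> number of available daemon pods on it.
type availableByNode map[string]int

// nodesWithPods counts nodes with at least one available daemon pod,
// matching the "Number of nodes with available pods" log lines.
func nodesWithPods(avail availableByNode) int {
	n := 0
	for _, c := range avail {
		if c > 0 {
			n++
		}
	}
	return n
}

// daemonSetReady reports whether every node runs exactly one available
// daemon pod — the condition the test waits for before and after the
// stop-and-revive step.
func daemonSetReady(nodes []string, avail availableByNode) bool {
	for _, node := range nodes {
		if avail[node] != 1 {
			return false
		}
	}
	return true
}

func main() {
	nodes := []string{"iruya-node", "iruya-server-sfge57q7djm7"}
	avail := availableByNode{"iruya-node": 1, "iruya-server-sfge57q7djm7": 1}
	fmt.Println(nodesWithPods(avail), daemonSetReady(nodes, avail))
}
```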
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:01:34.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5f96d424-2344-4e72-94d5-cbaeee6e70d4
STEP: Creating a pod to test consume configMaps
Feb  3 14:01:34.323: INFO: Waiting up to 5m0s for pod "pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75" in namespace "configmap-5982" to be "success or failure"
Feb  3 14:01:34.368: INFO: Pod "pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75": Phase="Pending", Reason="", readiness=false. Elapsed: 43.927514ms
Feb  3 14:01:36.383: INFO: Pod "pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058643513s
Feb  3 14:01:38.391: INFO: Pod "pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067477342s
Feb  3 14:01:40.403: INFO: Pod "pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079018678s
Feb  3 14:01:42.413: INFO: Pod "pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088996325s
STEP: Saw pod success
Feb  3 14:01:42.413: INFO: Pod "pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75" satisfied condition "success or failure"
Feb  3 14:01:42.418: INFO: Trying to get logs from node iruya-node pod pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75 container configmap-volume-test: 
STEP: delete the pod
Feb  3 14:01:42.497: INFO: Waiting for pod pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75 to disappear
Feb  3 14:01:42.505: INFO: Pod pod-configmaps-48941cce-f474-4645-b419-5bcba177ca75 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:01:42.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5982" for this suite.
Feb  3 14:01:48.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:01:48.669: INFO: namespace configmap-5982 deletion completed in 6.154457978s

• [SLOW TEST:14.544 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:01:48.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  3 14:01:48.808: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-808,SelfLink:/api/v1/namespaces/watch-808/configmaps/e2e-watch-test-resource-version,UID:3dc9140b-d50b-4764-8ab1-d148d9f10bca,ResourceVersion:22947531,Generation:0,CreationTimestamp:2020-02-03 14:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 14:01:48.808: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-808,SelfLink:/api/v1/namespaces/watch-808/configmaps/e2e-watch-test-resource-version,UID:3dc9140b-d50b-4764-8ab1-d148d9f10bca,ResourceVersion:22947532,Generation:0,CreationTimestamp:2020-02-03 14:01:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:01:48.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-808" for this suite.
Feb  3 14:01:54.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:01:55.077: INFO: namespace watch-808 deletion completed in 6.260185626s

• [SLOW TEST:6.408 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:01:55.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-ccd68a54-a0e9-4a93-8a29-ba046a5c9edc
STEP: Creating a pod to test consume secrets
Feb  3 14:01:55.322: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800" in namespace "projected-4760" to be "success or failure"
Feb  3 14:01:55.484: INFO: Pod "pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800": Phase="Pending", Reason="", readiness=false. Elapsed: 161.843495ms
Feb  3 14:01:57.495: INFO: Pod "pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172892152s
Feb  3 14:01:59.504: INFO: Pod "pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182231933s
Feb  3 14:02:01.514: INFO: Pod "pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192286659s
Feb  3 14:02:03.521: INFO: Pod "pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.199702362s
STEP: Saw pod success
Feb  3 14:02:03.521: INFO: Pod "pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800" satisfied condition "success or failure"
Feb  3 14:02:03.525: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800 container projected-secret-volume-test: 
STEP: delete the pod
Feb  3 14:02:03.596: INFO: Waiting for pod pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800 to disappear
Feb  3 14:02:03.646: INFO: Pod pod-projected-secrets-94a3a9e1-2423-4ede-b630-7dbf00a21800 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:02:03.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4760" for this suite.
Feb  3 14:02:09.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:02:09.911: INFO: namespace projected-4760 deletion completed in 6.232924745s

• [SLOW TEST:14.834 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:02:09.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-8c03dd6c-e559-477d-b8cb-71a12c07bb89
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:02:20.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1777" for this suite.
Feb  3 14:02:42.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:02:42.333: INFO: namespace configmap-1777 deletion completed in 22.196006827s

• [SLOW TEST:32.420 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:02:42.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  3 14:02:42.469: INFO: Waiting up to 5m0s for pod "pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428" in namespace "emptydir-1874" to be "success or failure"
Feb  3 14:02:42.494: INFO: Pod "pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428": Phase="Pending", Reason="", readiness=false. Elapsed: 24.398175ms
Feb  3 14:02:44.520: INFO: Pod "pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050504465s
Feb  3 14:02:46.532: INFO: Pod "pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062400312s
Feb  3 14:02:48.813: INFO: Pod "pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343602584s
Feb  3 14:02:50.825: INFO: Pod "pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.355752752s
STEP: Saw pod success
Feb  3 14:02:50.825: INFO: Pod "pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428" satisfied condition "success or failure"
Feb  3 14:02:50.829: INFO: Trying to get logs from node iruya-node pod pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428 container test-container: 
STEP: delete the pod
Feb  3 14:02:50.968: INFO: Waiting for pod pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428 to disappear
Feb  3 14:02:50.973: INFO: Pod pod-b335823d-93d1-47a7-90d3-c1b6d8c5c428 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:02:50.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1874" for this suite.
Feb  3 14:02:56.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:02:57.098: INFO: namespace emptydir-1874 deletion completed in 6.120044882s

• [SLOW TEST:14.765 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:02:57.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-95298c05-f4db-481d-8a5a-298cef4e1a99
STEP: Creating a pod to test consume secrets
Feb  3 14:02:57.171: INFO: Waiting up to 5m0s for pod "pod-secrets-023510c7-749a-4236-bb19-902f51176257" in namespace "secrets-6518" to be "success or failure"
Feb  3 14:02:57.198: INFO: Pod "pod-secrets-023510c7-749a-4236-bb19-902f51176257": Phase="Pending", Reason="", readiness=false. Elapsed: 26.391087ms
Feb  3 14:02:59.215: INFO: Pod "pod-secrets-023510c7-749a-4236-bb19-902f51176257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044073246s
Feb  3 14:03:01.223: INFO: Pod "pod-secrets-023510c7-749a-4236-bb19-902f51176257": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051554755s
Feb  3 14:03:03.229: INFO: Pod "pod-secrets-023510c7-749a-4236-bb19-902f51176257": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057994108s
Feb  3 14:03:05.237: INFO: Pod "pod-secrets-023510c7-749a-4236-bb19-902f51176257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065929077s
STEP: Saw pod success
Feb  3 14:03:05.237: INFO: Pod "pod-secrets-023510c7-749a-4236-bb19-902f51176257" satisfied condition "success or failure"
Feb  3 14:03:05.241: INFO: Trying to get logs from node iruya-node pod pod-secrets-023510c7-749a-4236-bb19-902f51176257 container secret-volume-test: 
STEP: delete the pod
Feb  3 14:03:05.440: INFO: Waiting for pod pod-secrets-023510c7-749a-4236-bb19-902f51176257 to disappear
Feb  3 14:03:05.449: INFO: Pod pod-secrets-023510c7-749a-4236-bb19-902f51176257 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:03:05.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6518" for this suite.
Feb  3 14:03:11.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:03:11.666: INFO: namespace secrets-6518 deletion completed in 6.151454634s

• [SLOW TEST:14.568 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:03:11.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-c7989436-645a-43aa-a3c3-531a7765e4bc
STEP: Creating a pod to test consume configMaps
Feb  3 14:03:11.888: INFO: Waiting up to 5m0s for pod "pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df" in namespace "configmap-9935" to be "success or failure"
Feb  3 14:03:11.903: INFO: Pod "pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df": Phase="Pending", Reason="", readiness=false. Elapsed: 14.815363ms
Feb  3 14:03:13.925: INFO: Pod "pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037338583s
Feb  3 14:03:15.958: INFO: Pod "pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070247955s
Feb  3 14:03:17.968: INFO: Pod "pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080546141s
Feb  3 14:03:19.975: INFO: Pod "pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087452116s
STEP: Saw pod success
Feb  3 14:03:19.975: INFO: Pod "pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df" satisfied condition "success or failure"
Feb  3 14:03:19.978: INFO: Trying to get logs from node iruya-node pod pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df container configmap-volume-test: 
STEP: delete the pod
Feb  3 14:03:20.074: INFO: Waiting for pod pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df to disappear
Feb  3 14:03:20.128: INFO: Pod pod-configmaps-020e888c-9eb2-4410-bf25-2ff169ec23df no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:03:20.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9935" for this suite.
Feb  3 14:03:26.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:03:26.356: INFO: namespace configmap-9935 deletion completed in 6.213998717s

• [SLOW TEST:14.689 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:03:26.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  3 14:03:35.044: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9698 pod-service-account-211ac81f-b5ea-4e09-9f5b-f2e48ec7d810 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  3 14:03:37.604: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9698 pod-service-account-211ac81f-b5ea-4e09-9f5b-f2e48ec7d810 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  3 14:03:38.040: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9698 pod-service-account-211ac81f-b5ea-4e09-9f5b-f2e48ec7d810 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:03:38.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9698" for this suite.
Feb  3 14:03:44.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:03:44.680: INFO: namespace svcaccounts-9698 deletion completed in 6.171313202s

• [SLOW TEST:18.323 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:03:44.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:03:54.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3583" for this suite.
Feb  3 14:04:36.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:04:37.085: INFO: namespace kubelet-test-3583 deletion completed in 42.217381266s

• [SLOW TEST:52.405 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:04:37.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:04:47.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6013" for this suite.
Feb  3 14:05:29.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:05:29.432: INFO: namespace kubelet-test-6013 deletion completed in 42.167646929s

• [SLOW TEST:52.347 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:05:29.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 14:05:29.591: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa" in namespace "downward-api-867" to be "success or failure"
Feb  3 14:05:29.628: INFO: Pod "downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa": Phase="Pending", Reason="", readiness=false. Elapsed: 37.078221ms
Feb  3 14:05:32.661: INFO: Pod "downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.070483744s
Feb  3 14:05:34.670: INFO: Pod "downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.078779317s
Feb  3 14:05:36.686: INFO: Pod "downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa": Phase="Pending", Reason="", readiness=false. Elapsed: 7.094578768s
Feb  3 14:05:38.695: INFO: Pod "downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.103814798s
STEP: Saw pod success
Feb  3 14:05:38.695: INFO: Pod "downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa" satisfied condition "success or failure"
Feb  3 14:05:38.698: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa container client-container: 
STEP: delete the pod
Feb  3 14:05:38.769: INFO: Waiting for pod downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa to disappear
Feb  3 14:05:38.773: INFO: Pod downwardapi-volume-8174be44-2cba-4518-a17d-90c72511adfa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:05:38.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-867" for this suite.
Feb  3 14:05:44.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:05:45.033: INFO: namespace downward-api-867 deletion completed in 6.227572702s

• [SLOW TEST:15.600 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:05:45.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb  3 14:05:45.133: INFO: Waiting up to 5m0s for pod "var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473" in namespace "var-expansion-8776" to be "success or failure"
Feb  3 14:05:45.137: INFO: Pod "var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473": Phase="Pending", Reason="", readiness=false. Elapsed: 3.667945ms
Feb  3 14:05:47.146: INFO: Pod "var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012620605s
Feb  3 14:05:49.154: INFO: Pod "var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020076191s
Feb  3 14:05:51.165: INFO: Pod "var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031664345s
Feb  3 14:05:53.175: INFO: Pod "var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041775957s
STEP: Saw pod success
Feb  3 14:05:53.175: INFO: Pod "var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473" satisfied condition "success or failure"
Feb  3 14:05:53.179: INFO: Trying to get logs from node iruya-node pod var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473 container dapi-container: 
STEP: delete the pod
Feb  3 14:05:53.223: INFO: Waiting for pod var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473 to disappear
Feb  3 14:05:53.237: INFO: Pod var-expansion-5279615e-43da-4ef1-a2fc-938cc7ef0473 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:05:53.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8776" for this suite.
Feb  3 14:05:59.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:05:59.563: INFO: namespace var-expansion-8776 deletion completed in 6.319133415s

• [SLOW TEST:14.529 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:05:59.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-fdcacd05-9539-46a1-a85c-7a4e873378f5
STEP: Creating a pod to test consume configMaps
Feb  3 14:05:59.711: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb" in namespace "configmap-3263" to be "success or failure"
Feb  3 14:05:59.870: INFO: Pod "pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb": Phase="Pending", Reason="", readiness=false. Elapsed: 158.920339ms
Feb  3 14:06:01.882: INFO: Pod "pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170296456s
Feb  3 14:06:03.895: INFO: Pod "pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183273543s
Feb  3 14:06:05.903: INFO: Pod "pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191946769s
Feb  3 14:06:07.917: INFO: Pod "pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.205040003s
STEP: Saw pod success
Feb  3 14:06:07.917: INFO: Pod "pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb" satisfied condition "success or failure"
Feb  3 14:06:07.921: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb container configmap-volume-test: 
STEP: delete the pod
Feb  3 14:06:08.021: INFO: Waiting for pod pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb to disappear
Feb  3 14:06:08.110: INFO: Pod pod-configmaps-c1dc7437-f57e-4b0f-aec6-5e2c152f75cb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:06:08.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3263" for this suite.
Feb  3 14:06:14.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:06:14.239: INFO: namespace configmap-3263 deletion completed in 6.121824036s

• [SLOW TEST:14.675 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:06:14.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-40344c00-a22e-4b7d-9e02-9354f9df5bdb
STEP: Creating a pod to test consume configMaps
Feb  3 14:06:14.356: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a" in namespace "projected-9814" to be "success or failure"
Feb  3 14:06:14.379: INFO: Pod "pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.494197ms
Feb  3 14:06:16.388: INFO: Pod "pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031774347s
Feb  3 14:06:18.398: INFO: Pod "pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042146336s
Feb  3 14:06:20.408: INFO: Pod "pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05139013s
Feb  3 14:06:22.418: INFO: Pod "pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061458513s
STEP: Saw pod success
Feb  3 14:06:22.418: INFO: Pod "pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a" satisfied condition "success or failure"
Feb  3 14:06:22.425: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 14:06:22.487: INFO: Waiting for pod pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a to disappear
Feb  3 14:06:22.493: INFO: Pod pod-projected-configmaps-067a4205-5710-4399-801c-6b80889bad0a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:06:22.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9814" for this suite.
Feb  3 14:06:28.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:06:28.689: INFO: namespace projected-9814 deletion completed in 6.18993722s

• [SLOW TEST:14.450 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:06:28.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  3 14:06:28.820: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2597,SelfLink:/api/v1/namespaces/watch-2597/configmaps/e2e-watch-test-watch-closed,UID:347bf918-aed1-498c-82f6-db970734d48a,ResourceVersion:22948214,Generation:0,CreationTimestamp:2020-02-03 14:06:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 14:06:28.821: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2597,SelfLink:/api/v1/namespaces/watch-2597/configmaps/e2e-watch-test-watch-closed,UID:347bf918-aed1-498c-82f6-db970734d48a,ResourceVersion:22948215,Generation:0,CreationTimestamp:2020-02-03 14:06:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  3 14:06:28.838: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2597,SelfLink:/api/v1/namespaces/watch-2597/configmaps/e2e-watch-test-watch-closed,UID:347bf918-aed1-498c-82f6-db970734d48a,ResourceVersion:22948216,Generation:0,CreationTimestamp:2020-02-03 14:06:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 14:06:28.838: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2597,SelfLink:/api/v1/namespaces/watch-2597/configmaps/e2e-watch-test-watch-closed,UID:347bf918-aed1-498c-82f6-db970734d48a,ResourceVersion:22948217,Generation:0,CreationTimestamp:2020-02-03 14:06:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:06:28.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2597" for this suite.
Feb  3 14:06:34.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:06:35.033: INFO: namespace watch-2597 deletion completed in 6.190037836s

• [SLOW TEST:6.343 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:06:35.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:06:44.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7745" for this suite.
Feb  3 14:07:06.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:07:06.401: INFO: namespace replication-controller-7745 deletion completed in 22.147673188s

• [SLOW TEST:31.368 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:07:06.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-95255273-4cdb-47d0-851a-0667f3f8141c
STEP: Creating configMap with name cm-test-opt-upd-70436ec8-d9a8-4f1c-9d83-d8f1393395e2
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-95255273-4cdb-47d0-851a-0667f3f8141c
STEP: Updating configmap cm-test-opt-upd-70436ec8-d9a8-4f1c-9d83-d8f1393395e2
STEP: Creating configMap with name cm-test-opt-create-a06c8104-60ec-4f09-bf79-ba26cc5ac0a2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:08:40.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3052" for this suite.
Feb  3 14:09:02.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:09:02.713: INFO: namespace configmap-3052 deletion completed in 22.176452146s

• [SLOW TEST:116.311 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:09:02.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e88e64fd-cf73-4b62-8beb-28abaae0f283
STEP: Creating a pod to test consume configMaps
Feb  3 14:09:02.964: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f" in namespace "projected-3429" to be "success or failure"
Feb  3 14:09:02.994: INFO: Pod "pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.319753ms
Feb  3 14:09:05.005: INFO: Pod "pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040471922s
Feb  3 14:09:07.013: INFO: Pod "pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048491172s
Feb  3 14:09:09.019: INFO: Pod "pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055386123s
Feb  3 14:09:11.026: INFO: Pod "pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062203538s
STEP: Saw pod success
Feb  3 14:09:11.026: INFO: Pod "pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f" satisfied condition "success or failure"
Feb  3 14:09:11.030: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 14:09:11.106: INFO: Waiting for pod pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f to disappear
Feb  3 14:09:11.177: INFO: Pod pod-projected-configmaps-3ad6db6e-04ad-4e9e-a739-b67f2093d89f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:09:11.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3429" for this suite.
Feb  3 14:09:17.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:09:17.357: INFO: namespace projected-3429 deletion completed in 6.170315648s

• [SLOW TEST:14.643 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:09:17.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  3 14:12:19.778: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:19.914: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:21.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:21.932: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:23.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:23.927: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:25.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:25.925: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:27.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:27.924: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:29.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:29.924: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:31.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:31.931: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:33.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:33.930: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:35.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:35.922: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:37.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:37.935: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:39.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:39.926: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  3 14:12:41.915: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  3 14:12:41.962: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:12:41.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6654" for this suite.
Feb  3 14:13:06.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:13:06.176: INFO: namespace container-lifecycle-hook-6654 deletion completed in 24.205539683s

• [SLOW TEST:228.818 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:13:06.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:13:06.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5987" for this suite.
Feb  3 14:13:22.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:13:22.475: INFO: namespace pods-5987 deletion completed in 16.181222275s

• [SLOW TEST:16.299 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:13:22.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6467
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  3 14:13:22.663: INFO: Found 0 stateful pods, waiting for 3
Feb  3 14:13:32.723: INFO: Found 2 stateful pods, waiting for 3
Feb  3 14:13:42.681: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:13:42.682: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:13:42.682: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 14:13:52.682: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:13:52.683: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:13:52.683: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  3 14:13:52.719: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  3 14:14:02.776: INFO: Updating stateful set ss2
Feb  3 14:14:02.851: INFO: Waiting for Pod statefulset-6467/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 14:14:12.891: INFO: Waiting for Pod statefulset-6467/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  3 14:14:23.546: INFO: Found 2 stateful pods, waiting for 3
Feb  3 14:14:33.556: INFO: Found 2 stateful pods, waiting for 3
Feb  3 14:14:43.556: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:14:43.556: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:14:43.556: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  3 14:14:43.590: INFO: Updating stateful set ss2
Feb  3 14:14:43.636: INFO: Waiting for Pod statefulset-6467/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 14:14:53.654: INFO: Waiting for Pod statefulset-6467/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 14:15:03.721: INFO: Updating stateful set ss2
Feb  3 14:15:03.994: INFO: Waiting for StatefulSet statefulset-6467/ss2 to complete update
Feb  3 14:15:03.995: INFO: Waiting for Pod statefulset-6467/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 14:15:14.033: INFO: Waiting for StatefulSet statefulset-6467/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  3 14:15:24.021: INFO: Deleting all statefulset in ns statefulset-6467
Feb  3 14:15:24.043: INFO: Scaling statefulset ss2 to 0
Feb  3 14:15:54.094: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 14:15:54.097: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:15:54.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6467" for this suite.
Feb  3 14:16:02.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:16:02.315: INFO: namespace statefulset-6467 deletion completed in 8.200479324s

• [SLOW TEST:159.840 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
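The "phased rolling update" in the StatefulSet test above relies on the RollingUpdate strategy's `partition` field: only pods whose ordinal is greater than or equal to the partition are moved to the update revision, which is why `ss2-1` and `ss2-0` wait on the old revision until the partition is lowered. A minimal sketch of that selection rule (the helper name is hypothetical, not framework code):

```python
def updated_ordinals(replicas: int, partition: int) -> list[int]:
    """Ordinals that a partitioned RollingUpdate moves to the new revision.

    Pods with ordinal >= partition are updated; pods below the partition
    stay on the old revision, which is what lets the test advance the
    rollout one pod at a time by lowering the partition.
    """
    return [i for i in range(replicas) if i >= partition]

# ss2 has 3 replicas; with partition=2 only ss2-2 updates,
# and lowering the partition to 0 completes the rollout.
print(updated_ordinals(3, 2))  # [2]
print(updated_ordinals(3, 0))  # [0, 1, 2]
```

Lowering the partition stepwise is what makes the update "phased": each decrement releases one more ordinal to the new revision.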
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:16:02.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-786f619d-1152-432a-a3d2-500dbc2d6f6d in namespace container-probe-8711
Feb  3 14:16:10.535: INFO: Started pod busybox-786f619d-1152-432a-a3d2-500dbc2d6f6d in namespace container-probe-8711
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 14:16:10.543: INFO: Initial restart count of pod busybox-786f619d-1152-432a-a3d2-500dbc2d6f6d is 0
Feb  3 14:17:01.481: INFO: Restart count of pod container-probe-8711/busybox-786f619d-1152-432a-a3d2-500dbc2d6f6d is now 1 (50.938181802s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:17:01.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8711" for this suite.
Feb  3 14:17:07.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:17:07.677: INFO: namespace container-probe-8711 deletion completed in 6.151837179s

• [SLOW TEST:65.361 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
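The probe test above records the pod's initial `restartCount` (0) and then waits until the kubelet, reacting to the failing `cat /tmp/health` liveness probe, restarts the container (count 1 after ~51s). A rough sketch of that wait loop, assuming a `get_restart_count` callable standing in for a real API read (this is not the framework's actual implementation):

```python
import itertools
import time

def wait_for_restart(get_restart_count, initial, timeout_s=120.0, poll_s=0.01):
    """Poll a container's restartCount until it exceeds the initial value.

    Mirrors the shape of the check in the liveness-probe test: record the
    initial count, then poll until the probe failure has triggered a restart.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        count = get_restart_count()
        if count > initial:
            return count
        time.sleep(poll_s)
    raise TimeoutError("restartCount never increased")

# Simulated status stream: two reads at 0, then the probe kills the container.
counts = itertools.chain([0, 0, 1], itertools.repeat(1))
print(wait_for_restart(lambda: next(counts), initial=0))  # 1
```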
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:17:07.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-81777344-cf39-4c2c-ad71-123cf800417b
STEP: Creating a pod to test consume configMaps
Feb  3 14:17:07.828: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7" in namespace "projected-3145" to be "success or failure"
Feb  3 14:17:07.900: INFO: Pod "pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7": Phase="Pending", Reason="", readiness=false. Elapsed: 71.746931ms
Feb  3 14:17:09.914: INFO: Pod "pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086591128s
Feb  3 14:17:11.930: INFO: Pod "pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101737117s
Feb  3 14:17:13.945: INFO: Pod "pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117080344s
Feb  3 14:17:15.953: INFO: Pod "pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125622251s
STEP: Saw pod success
Feb  3 14:17:15.954: INFO: Pod "pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7" satisfied condition "success or failure"
Feb  3 14:17:15.958: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 14:17:16.135: INFO: Waiting for pod pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7 to disappear
Feb  3 14:17:16.144: INFO: Pod pod-projected-configmaps-112a4ffc-2e36-4401-a502-d940842097b7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:17:16.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3145" for this suite.
Feb  3 14:17:22.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:17:22.278: INFO: namespace projected-3145 deletion completed in 6.126868439s

• [SLOW TEST:14.600 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
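The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from a polling wait: the framework re-reads the pod phase on a short interval, logging each `Pending` observation, until the pod reaches a terminal phase or the budget expires. A simplified sketch of that pattern, with `get_phase` as an assumed stand-in for reading `pod.status.phase` (not the framework's real code):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout_s=300.0, interval_s=0.01):
    """Poll a pod phase until it reaches a terminal phase or times out.

    Returns the terminal phase and the elapsed time, mirroring the
    "Elapsed: ..." figures the e2e framework logs on every poll.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        phase = get_phase()
        if phase in want:
            return phase, time.monotonic() - start
        time.sleep(interval_s)
    raise TimeoutError("pod never reached a terminal phase")

phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
phase, elapsed = wait_for_pod_phase(lambda: next(phases))
print(phase)  # Succeeded
```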
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:17:22.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb  3 14:17:22.438: INFO: Waiting up to 5m0s for pod "client-containers-6732602b-f991-41c8-8093-c78e07ee32e0" in namespace "containers-8355" to be "success or failure"
Feb  3 14:17:22.475: INFO: Pod "client-containers-6732602b-f991-41c8-8093-c78e07ee32e0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.321335ms
Feb  3 14:17:24.488: INFO: Pod "client-containers-6732602b-f991-41c8-8093-c78e07ee32e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049269104s
Feb  3 14:17:26.510: INFO: Pod "client-containers-6732602b-f991-41c8-8093-c78e07ee32e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071574357s
Feb  3 14:17:28.523: INFO: Pod "client-containers-6732602b-f991-41c8-8093-c78e07ee32e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084329034s
Feb  3 14:17:30.540: INFO: Pod "client-containers-6732602b-f991-41c8-8093-c78e07ee32e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100856852s
STEP: Saw pod success
Feb  3 14:17:30.540: INFO: Pod "client-containers-6732602b-f991-41c8-8093-c78e07ee32e0" satisfied condition "success or failure"
Feb  3 14:17:30.544: INFO: Trying to get logs from node iruya-node pod client-containers-6732602b-f991-41c8-8093-c78e07ee32e0 container test-container: 
STEP: delete the pod
Feb  3 14:17:30.621: INFO: Waiting for pod client-containers-6732602b-f991-41c8-8093-c78e07ee32e0 to disappear
Feb  3 14:17:30.632: INFO: Pod client-containers-6732602b-f991-41c8-8093-c78e07ee32e0 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:17:30.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8355" for this suite.
Feb  3 14:17:36.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:17:36.885: INFO: namespace containers-8355 deletion completed in 6.244672202s

• [SLOW TEST:14.606 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
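The "image defaults" test above exercises the documented interaction between a container's `command`/`args` and the image's ENTRYPOINT/CMD: leaving both blank runs the image defaults unchanged, while setting `command` alone discards the image CMD. A sketch of that resolution table (hypothetical helper, illustrating the rule rather than kubelet code):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve the process a container runs, per the Kubernetes rules:
    an unset `command` falls back to the image ENTRYPOINT, an unset `args`
    falls back to the image CMD, and setting `command` alone ignores the
    image CMD entirely."""
    if command is None and args is None:
        return image_entrypoint + image_cmd
    if command is None:
        return image_entrypoint + args
    if args is None:
        return command
    return command + args

# Blank command and args -> image defaults run unchanged.
print(effective_invocation(["/ep"], ["default-arg"]))              # ['/ep', 'default-arg']
print(effective_invocation(["/ep"], ["default-arg"], args=["x"]))  # ['/ep', 'x']
```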
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:17:36.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  3 14:17:37.043: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1065,SelfLink:/api/v1/namespaces/watch-1065/configmaps/e2e-watch-test-label-changed,UID:44a09254-cb88-4200-aa4d-13c3b114130d,ResourceVersion:22949614,Generation:0,CreationTimestamp:2020-02-03 14:17:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 14:17:37.043: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1065,SelfLink:/api/v1/namespaces/watch-1065/configmaps/e2e-watch-test-label-changed,UID:44a09254-cb88-4200-aa4d-13c3b114130d,ResourceVersion:22949615,Generation:0,CreationTimestamp:2020-02-03 14:17:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  3 14:17:37.043: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1065,SelfLink:/api/v1/namespaces/watch-1065/configmaps/e2e-watch-test-label-changed,UID:44a09254-cb88-4200-aa4d-13c3b114130d,ResourceVersion:22949616,Generation:0,CreationTimestamp:2020-02-03 14:17:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  3 14:17:47.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1065,SelfLink:/api/v1/namespaces/watch-1065/configmaps/e2e-watch-test-label-changed,UID:44a09254-cb88-4200-aa4d-13c3b114130d,ResourceVersion:22949632,Generation:0,CreationTimestamp:2020-02-03 14:17:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 14:17:47.158: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1065,SelfLink:/api/v1/namespaces/watch-1065/configmaps/e2e-watch-test-label-changed,UID:44a09254-cb88-4200-aa4d-13c3b114130d,ResourceVersion:22949633,Generation:0,CreationTimestamp:2020-02-03 14:17:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  3 14:17:47.158: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1065,SelfLink:/api/v1/namespaces/watch-1065/configmaps/e2e-watch-test-label-changed,UID:44a09254-cb88-4200-aa4d-13c3b114130d,ResourceVersion:22949634,Generation:0,CreationTimestamp:2020-02-03 14:17:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:17:47.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1065" for this suite.
Feb  3 14:17:53.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:17:53.331: INFO: namespace watch-1065 deletion completed in 6.140093831s

• [SLOW TEST:16.446 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
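The Watchers test above shows the key property of a label-selector watch: the object is never actually deleted mid-test, yet the watch reports `DELETED` when the label changes away from the selector and a fresh `ADDED` when it is restored, with the intervening mutation (mutation 2) invisible. A small simulation of that translation, with the event-tuple shape as an assumption for illustration:

```python
def selector_events(raw_events, selector):
    """Translate raw object events into what a label-selector watch reports:
    ceasing to match surfaces as DELETED, matching again surfaces as ADDED.
    Each raw event is (verb, labels_after_event)."""
    out, matched = [], False
    for verb, labels in raw_events:
        matches = (verb != "DELETED" and
                   all(labels.get(k) == v for k, v in selector.items()))
        if matches and not matched:
            out.append("ADDED")
        elif matches and matched:
            out.append("MODIFIED")
        elif matched and not matches:
            out.append("DELETED")
        matched = matches
    return out

sel = {"watch-this-configmap": "label-changed-and-restored"}
raw = [
    ("ADDED",    dict(sel)),                                  # created
    ("MODIFIED", dict(sel)),                                  # mutation 1
    ("MODIFIED", {"watch-this-configmap": "other"}),          # label changed away
    ("MODIFIED", {"watch-this-configmap": "other"}),          # mutation 2, unseen
    ("MODIFIED", dict(sel)),                                  # label restored
    ("MODIFIED", dict(sel)),                                  # mutation 3
    ("DELETED",  dict(sel)),                                  # actual deletion
]
print(selector_events(raw, sel))
# ['ADDED', 'MODIFIED', 'DELETED', 'ADDED', 'MODIFIED', 'DELETED']
```

This matches the two ADDED/MODIFIED/DELETED triplets logged at 14:17:37 and 14:17:47 above.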
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:17:53.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  3 14:17:53.448: INFO: Waiting up to 5m0s for pod "pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6" in namespace "emptydir-6131" to be "success or failure"
Feb  3 14:17:53.457: INFO: Pod "pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.984788ms
Feb  3 14:17:55.469: INFO: Pod "pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020828466s
Feb  3 14:17:57.478: INFO: Pod "pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029578446s
Feb  3 14:17:59.485: INFO: Pod "pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03687561s
Feb  3 14:18:01.495: INFO: Pod "pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046531179s
STEP: Saw pod success
Feb  3 14:18:01.495: INFO: Pod "pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6" satisfied condition "success or failure"
Feb  3 14:18:01.506: INFO: Trying to get logs from node iruya-node pod pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6 container test-container: 
STEP: delete the pod
Feb  3 14:18:01.557: INFO: Waiting for pod pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6 to disappear
Feb  3 14:18:01.582: INFO: Pod pod-86b15ec9-a6e0-4683-9b1c-83e9d4e736d6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:18:01.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6131" for this suite.
Feb  3 14:18:07.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:18:07.758: INFO: namespace emptydir-6131 deletion completed in 6.167452947s

• [SLOW TEST:14.427 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
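The `(non-root,0777,tmpfs)` case above writes a file into a memory-backed emptyDir as a non-root user and verifies its permission bits. The core of that assertion is just a mode check; a sketch using a plain temp file in place of the tmpfs volume an e2e cluster would provide (helper name is illustrative only):

```python
import os
import stat
import tempfile

def mount_mode(path, mode=0o777):
    """Set and read back the permission bits on a path, the way the
    emptyDir test verifies its 0777 mount: chmod, then mask the mode
    bits out of the stat result."""
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.NamedTemporaryFile() as f:
    print(oct(mount_mode(f.name)))  # 0o777
```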
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:18:07.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  3 14:18:14.276: INFO: 0 pods remaining
Feb  3 14:18:14.276: INFO: 0 pods has nil DeletionTimestamp
Feb  3 14:18:14.276: INFO: 
STEP: Gathering metrics
W0203 14:18:15.276369       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 14:18:15.276: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:18:15.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5414" for this suite.
Feb  3 14:18:23.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:18:23.550: INFO: namespace gc-5414 deletion completed in 8.266534554s

• [SLOW TEST:15.791 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:18:23.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 14:18:23.639: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:18:24.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7230" for this suite.
Feb  3 14:18:30.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:18:31.106: INFO: namespace custom-resource-definition-7230 deletion completed in 6.199091831s

• [SLOW TEST:7.556 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:18:31.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-05f71d47-518e-440d-99f6-53371262fed8
STEP: Creating a pod to test consume configMaps
Feb  3 14:18:31.277: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61" in namespace "projected-3448" to be "success or failure"
Feb  3 14:18:31.284: INFO: Pod "pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61": Phase="Pending", Reason="", readiness=false. Elapsed: 7.09193ms
Feb  3 14:18:33.291: INFO: Pod "pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014260159s
Feb  3 14:18:35.345: INFO: Pod "pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068185722s
Feb  3 14:18:37.353: INFO: Pod "pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076185982s
Feb  3 14:18:39.362: INFO: Pod "pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084985222s
STEP: Saw pod success
Feb  3 14:18:39.362: INFO: Pod "pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61" satisfied condition "success or failure"
Feb  3 14:18:39.368: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 14:18:39.485: INFO: Waiting for pod pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61 to disappear
Feb  3 14:18:39.491: INFO: Pod pod-projected-configmaps-a455d0e2-e301-4638-86ed-45e732908d61 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:18:39.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3448" for this suite.
Feb  3 14:18:45.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:18:45.721: INFO: namespace projected-3448 deletion completed in 6.222981135s

• [SLOW TEST:14.615 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:18:45.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 14:18:45.791: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  3 14:18:45.803: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  3 14:18:50.813: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  3 14:18:52.832: INFO: Creating deployment "test-rolling-update-deployment"
Feb  3 14:18:52.840: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  3 14:18:52.869: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  3 14:18:54.901: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  3 14:18:54.905: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336332, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336332, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336333, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336332, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 14:18:56.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336332, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336332, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336333, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336332, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 14:18:58.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336332, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336332, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336333, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716336332, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  3 14:19:00.959: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  3 14:19:00.986: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-7551,SelfLink:/apis/apps/v1/namespaces/deployment-7551/deployments/test-rolling-update-deployment,UID:4d7f583e-bf03-4c1c-b24e-7a4d2c718b13,ResourceVersion:22949952,Generation:1,CreationTimestamp:2020-02-03 14:18:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-03 14:18:52 +0000 UTC 2020-02-03 14:18:52 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-03 14:18:59 +0000 UTC 2020-02-03 14:18:52 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  3 14:19:00.990: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-7551,SelfLink:/apis/apps/v1/namespaces/deployment-7551/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:510d3d0b-9283-4dab-97ff-5a65d588b3ef,ResourceVersion:22949941,Generation:1,CreationTimestamp:2020-02-03 14:18:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4d7f583e-bf03-4c1c-b24e-7a4d2c718b13 0xc002eaa5f7 0xc002eaa5f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  3 14:19:00.990: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  3 14:19:00.991: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-7551,SelfLink:/apis/apps/v1/namespaces/deployment-7551/replicasets/test-rolling-update-controller,UID:1b337bd6-b669-4455-8d7f-e56a3e1ba230,ResourceVersion:22949951,Generation:2,CreationTimestamp:2020-02-03 14:18:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4d7f583e-bf03-4c1c-b24e-7a4d2c718b13 0xc002eaa50f 0xc002eaa520}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  3 14:19:00.995: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-xlk57" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-xlk57,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-7551,SelfLink:/api/v1/namespaces/deployment-7551/pods/test-rolling-update-deployment-79f6b9d75c-xlk57,UID:82a17db4-dfc5-48c8-8ef0-e634474555dc,ResourceVersion:22949940,Generation:0,CreationTimestamp:2020-02-03 14:18:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 510d3d0b-9283-4dab-97ff-5a65d588b3ef 0xc0022d6177 0xc0022d6178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vmsv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vmsv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-vmsv4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022d61f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022d6210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:18:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:18:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:18:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:18:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-03 14:18:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-03 14:18:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://435caf92a36e903d5cca9d84514277f203f32ad43932a9fcec770482e1b647f4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:19:00.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7551" for this suite.
Feb  3 14:19:07.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:19:07.111: INFO: namespace deployment-7551 deletion completed in 6.111128561s

• [SLOW TEST:21.389 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
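The Deployment in the test above uses the default RollingUpdate strategy (maxSurge and maxUnavailable both 25%). For percentage values, Kubernetes rounds maxSurge up and maxUnavailable down, which is why this 1-replica deployment could surge to two pods (`deployment.kubernetes.io/max-replicas: 2` on the ReplicaSets) while never dropping below one ready pod. A sketch of that rounding, assuming the documented convention:

```go
package main

import (
	"fmt"
	"math"
)

// rollingUpdateBounds applies the rounding the Deployment docs describe
// for percentage values: maxSurge rounds up, maxUnavailable rounds down.
func rollingUpdateBounds(replicas int, surgePct, unavailPct float64) (maxSurge, maxUnavailable int) {
	maxSurge = int(math.Ceil(float64(replicas) * surgePct))
	maxUnavailable = int(math.Floor(float64(replicas) * unavailPct))
	return
}

func main() {
	// 1 replica at 25%: surge rounds up to 1 (so max-replicas is 2),
	// unavailable rounds down to 0 (at least one pod stays ready).
	surge, unavail := rollingUpdateBounds(1, 0.25, 0.25)
	fmt.Println(surge, unavail) // 1 0
}
```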
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:19:07.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  3 14:19:07.268: INFO: Waiting up to 5m0s for pod "downward-api-429fd270-008a-47bb-b80a-476d0070dbb8" in namespace "downward-api-484" to be "success or failure"
Feb  3 14:19:07.278: INFO: Pod "downward-api-429fd270-008a-47bb-b80a-476d0070dbb8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.926486ms
Feb  3 14:19:09.290: INFO: Pod "downward-api-429fd270-008a-47bb-b80a-476d0070dbb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02223384s
Feb  3 14:19:11.300: INFO: Pod "downward-api-429fd270-008a-47bb-b80a-476d0070dbb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032061832s
Feb  3 14:19:13.315: INFO: Pod "downward-api-429fd270-008a-47bb-b80a-476d0070dbb8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046632445s
Feb  3 14:19:15.323: INFO: Pod "downward-api-429fd270-008a-47bb-b80a-476d0070dbb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054936458s
STEP: Saw pod success
Feb  3 14:19:15.323: INFO: Pod "downward-api-429fd270-008a-47bb-b80a-476d0070dbb8" satisfied condition "success or failure"
Feb  3 14:19:15.327: INFO: Trying to get logs from node iruya-node pod downward-api-429fd270-008a-47bb-b80a-476d0070dbb8 container dapi-container: 
STEP: delete the pod
Feb  3 14:19:15.423: INFO: Waiting for pod downward-api-429fd270-008a-47bb-b80a-476d0070dbb8 to disappear
Feb  3 14:19:15.436: INFO: Pod downward-api-429fd270-008a-47bb-b80a-476d0070dbb8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:19:15.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-484" for this suite.
Feb  3 14:19:21.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:19:21.618: INFO: namespace downward-api-484 deletion completed in 6.176427155s

• [SLOW TEST:14.507 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:19:21.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 14:19:21.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff" in namespace "downward-api-3223" to be "success or failure"
Feb  3 14:19:21.851: INFO: Pod "downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff": Phase="Pending", Reason="", readiness=false. Elapsed: 22.048092ms
Feb  3 14:19:23.923: INFO: Pod "downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094942911s
Feb  3 14:19:25.932: INFO: Pod "downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103866039s
Feb  3 14:19:27.967: INFO: Pod "downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138202669s
Feb  3 14:19:29.976: INFO: Pod "downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.14725457s
STEP: Saw pod success
Feb  3 14:19:29.976: INFO: Pod "downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff" satisfied condition "success or failure"
Feb  3 14:19:29.981: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff container client-container: 
STEP: delete the pod
Feb  3 14:19:30.033: INFO: Waiting for pod downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff to disappear
Feb  3 14:19:30.040: INFO: Pod downwardapi-volume-a8531236-da2e-4061-a0ec-ec315d1eceff no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:19:30.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3223" for this suite.
Feb  3 14:19:36.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:19:36.259: INFO: namespace downward-api-3223 deletion completed in 6.214399653s

• [SLOW TEST:14.641 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
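A detail worth noting when reading the pod dumps in this log: volume sources show `DefaultMode:*420` because file modes in the Kubernetes API are plain int32s, so the familiar octal 0644 serializes as decimal 420, and a restrictive per-item mode like 0400 (the kind this "set mode on item file" test exercises) would appear as 256. A quick check of the conversion:

```go
package main

import "fmt"

func main() {
	// API-server JSON and the struct dumps above carry modes in decimal.
	fmt.Printf("0644 octal = %d decimal\n", 0o644) // 420
	fmt.Printf("0400 octal = %d decimal\n", 0o400) // 256
}
```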
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:19:36.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7726
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7726
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7726
Feb  3 14:19:36.432: INFO: Found 0 stateful pods, waiting for 1
Feb  3 14:19:46.456: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  3 14:19:46.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7726 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 14:19:49.075: INFO: stderr: "I0203 14:19:48.749421    2031 log.go:172] (0xc0009784d0) (0xc000823720) Create stream\nI0203 14:19:48.749494    2031 log.go:172] (0xc0009784d0) (0xc000823720) Stream added, broadcasting: 1\nI0203 14:19:48.758647    2031 log.go:172] (0xc0009784d0) Reply frame received for 1\nI0203 14:19:48.758760    2031 log.go:172] (0xc0009784d0) (0xc000603040) Create stream\nI0203 14:19:48.758834    2031 log.go:172] (0xc0009784d0) (0xc000603040) Stream added, broadcasting: 3\nI0203 14:19:48.761718    2031 log.go:172] (0xc0009784d0) Reply frame received for 3\nI0203 14:19:48.761764    2031 log.go:172] (0xc0009784d0) (0xc000606be0) Create stream\nI0203 14:19:48.761777    2031 log.go:172] (0xc0009784d0) (0xc000606be0) Stream added, broadcasting: 5\nI0203 14:19:48.764592    2031 log.go:172] (0xc0009784d0) Reply frame received for 5\nI0203 14:19:48.904067    2031 log.go:172] (0xc0009784d0) Data frame received for 5\nI0203 14:19:48.904152    2031 log.go:172] (0xc000606be0) (5) Data frame handling\nI0203 14:19:48.904179    2031 log.go:172] (0xc000606be0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 14:19:48.941879    2031 log.go:172] (0xc0009784d0) Data frame received for 3\nI0203 14:19:48.941904    2031 log.go:172] (0xc000603040) (3) Data frame handling\nI0203 14:19:48.941915    2031 log.go:172] (0xc000603040) (3) Data frame sent\nI0203 14:19:49.061484    2031 log.go:172] (0xc0009784d0) (0xc000603040) Stream removed, broadcasting: 3\nI0203 14:19:49.061679    2031 log.go:172] (0xc0009784d0) Data frame received for 1\nI0203 14:19:49.061712    2031 log.go:172] (0xc000823720) (1) Data frame handling\nI0203 14:19:49.061733    2031 log.go:172] (0xc000823720) (1) Data frame sent\nI0203 14:19:49.061746    2031 log.go:172] (0xc0009784d0) (0xc000823720) Stream removed, broadcasting: 1\nI0203 14:19:49.062262    2031 log.go:172] (0xc0009784d0) (0xc000606be0) Stream removed, broadcasting: 5\nI0203 14:19:49.062363    2031 log.go:172] 
(0xc0009784d0) (0xc000823720) Stream removed, broadcasting: 1\nI0203 14:19:49.062448    2031 log.go:172] (0xc0009784d0) (0xc000603040) Stream removed, broadcasting: 3\nI0203 14:19:49.062481    2031 log.go:172] (0xc0009784d0) (0xc000606be0) Stream removed, broadcasting: 5\nI0203 14:19:49.062522    2031 log.go:172] (0xc0009784d0) Go away received\n"
Feb  3 14:19:49.075: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 14:19:49.075: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 14:19:49.085: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  3 14:19:59.095: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 14:19:59.095: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 14:19:59.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999439s
Feb  3 14:20:00.132: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99033593s
Feb  3 14:20:01.144: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.979621269s
Feb  3 14:20:02.164: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.966850701s
Feb  3 14:20:03.174: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.947503035s
Feb  3 14:20:04.180: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.937406722s
Feb  3 14:20:05.191: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.931548907s
Feb  3 14:20:06.203: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.920455342s
Feb  3 14:20:07.215: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.907789116s
Feb  3 14:20:08.229: INFO: Verifying statefulset ss doesn't scale past 1 for another 895.65275ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7726
Feb  3 14:20:09.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7726 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 14:20:09.816: INFO: stderr: "I0203 14:20:09.536833    2052 log.go:172] (0xc000964420) (0xc00028a820) Create stream\nI0203 14:20:09.536949    2052 log.go:172] (0xc000964420) (0xc00028a820) Stream added, broadcasting: 1\nI0203 14:20:09.545358    2052 log.go:172] (0xc000964420) Reply frame received for 1\nI0203 14:20:09.545421    2052 log.go:172] (0xc000964420) (0xc00097c000) Create stream\nI0203 14:20:09.545452    2052 log.go:172] (0xc000964420) (0xc00097c000) Stream added, broadcasting: 3\nI0203 14:20:09.547402    2052 log.go:172] (0xc000964420) Reply frame received for 3\nI0203 14:20:09.547426    2052 log.go:172] (0xc000964420) (0xc00028a8c0) Create stream\nI0203 14:20:09.547432    2052 log.go:172] (0xc000964420) (0xc00028a8c0) Stream added, broadcasting: 5\nI0203 14:20:09.548935    2052 log.go:172] (0xc000964420) Reply frame received for 5\nI0203 14:20:09.669507    2052 log.go:172] (0xc000964420) Data frame received for 3\nI0203 14:20:09.669602    2052 log.go:172] (0xc00097c000) (3) Data frame handling\nI0203 14:20:09.669619    2052 log.go:172] (0xc00097c000) (3) Data frame sent\nI0203 14:20:09.669653    2052 log.go:172] (0xc000964420) Data frame received for 5\nI0203 14:20:09.669666    2052 log.go:172] (0xc00028a8c0) (5) Data frame handling\nI0203 14:20:09.669683    2052 log.go:172] (0xc00028a8c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0203 14:20:09.798081    2052 log.go:172] (0xc000964420) Data frame received for 1\nI0203 14:20:09.798753    2052 log.go:172] (0xc000964420) (0xc00097c000) Stream removed, broadcasting: 3\nI0203 14:20:09.798942    2052 log.go:172] (0xc00028a820) (1) Data frame handling\nI0203 14:20:09.799045    2052 log.go:172] (0xc00028a820) (1) Data frame sent\nI0203 14:20:09.799131    2052 log.go:172] (0xc000964420) (0xc00028a820) Stream removed, broadcasting: 1\nI0203 14:20:09.799271    2052 log.go:172] (0xc000964420) (0xc00028a8c0) Stream removed, broadcasting: 5\nI0203 14:20:09.799358    2052 log.go:172] 
(0xc000964420) Go away received\nI0203 14:20:09.800608    2052 log.go:172] (0xc000964420) (0xc00028a820) Stream removed, broadcasting: 1\nI0203 14:20:09.800646    2052 log.go:172] (0xc000964420) (0xc00097c000) Stream removed, broadcasting: 3\nI0203 14:20:09.800668    2052 log.go:172] (0xc000964420) (0xc00028a8c0) Stream removed, broadcasting: 5\n"
Feb  3 14:20:09.816: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 14:20:09.816: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 14:20:09.826: INFO: Found 1 stateful pods, waiting for 3
Feb  3 14:20:19.839: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:20:19.839: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:20:19.839: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 14:20:29.836: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:20:29.836: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 14:20:29.836: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  3 14:20:29.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7726 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 14:20:30.744: INFO: stderr: "I0203 14:20:30.216930    2074 log.go:172] (0xc0008f4580) (0xc00059aaa0) Create stream\nI0203 14:20:30.217064    2074 log.go:172] (0xc0008f4580) (0xc00059aaa0) Stream added, broadcasting: 1\nI0203 14:20:30.226650    2074 log.go:172] (0xc0008f4580) Reply frame received for 1\nI0203 14:20:30.226873    2074 log.go:172] (0xc0008f4580) (0xc000898000) Create stream\nI0203 14:20:30.226919    2074 log.go:172] (0xc0008f4580) (0xc000898000) Stream added, broadcasting: 3\nI0203 14:20:30.229288    2074 log.go:172] (0xc0008f4580) Reply frame received for 3\nI0203 14:20:30.229321    2074 log.go:172] (0xc0008f4580) (0xc0008980a0) Create stream\nI0203 14:20:30.229330    2074 log.go:172] (0xc0008f4580) (0xc0008980a0) Stream added, broadcasting: 5\nI0203 14:20:30.231455    2074 log.go:172] (0xc0008f4580) Reply frame received for 5\nI0203 14:20:30.427960    2074 log.go:172] (0xc0008f4580) Data frame received for 3\nI0203 14:20:30.428110    2074 log.go:172] (0xc000898000) (3) Data frame handling\nI0203 14:20:30.428155    2074 log.go:172] (0xc000898000) (3) Data frame sent\nI0203 14:20:30.428605    2074 log.go:172] (0xc0008f4580) Data frame received for 5\nI0203 14:20:30.428673    2074 log.go:172] (0xc0008980a0) (5) Data frame handling\nI0203 14:20:30.428763    2074 log.go:172] (0xc0008980a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 14:20:30.717361    2074 log.go:172] (0xc0008f4580) (0xc000898000) Stream removed, broadcasting: 3\nI0203 14:20:30.717605    2074 log.go:172] (0xc0008f4580) Data frame received for 1\nI0203 14:20:30.717751    2074 log.go:172] (0xc0008f4580) (0xc0008980a0) Stream removed, broadcasting: 5\nI0203 14:20:30.717851    2074 log.go:172] (0xc00059aaa0) (1) Data frame handling\nI0203 14:20:30.717884    2074 log.go:172] (0xc00059aaa0) (1) Data frame sent\nI0203 14:20:30.717916    2074 log.go:172] (0xc0008f4580) (0xc00059aaa0) Stream removed, broadcasting: 1\nI0203 14:20:30.717977    2074 log.go:172] 
(0xc0008f4580) Go away received\nI0203 14:20:30.720022    2074 log.go:172] (0xc0008f4580) (0xc00059aaa0) Stream removed, broadcasting: 1\nI0203 14:20:30.720078    2074 log.go:172] (0xc0008f4580) (0xc000898000) Stream removed, broadcasting: 3\nI0203 14:20:30.720090    2074 log.go:172] (0xc0008f4580) (0xc0008980a0) Stream removed, broadcasting: 5\n"
Feb  3 14:20:30.745: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 14:20:30.745: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 14:20:30.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7726 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 14:20:31.342: INFO: stderr: "I0203 14:20:30.921939    2095 log.go:172] (0xc000104d10) (0xc0005f28c0) Create stream\nI0203 14:20:30.922026    2095 log.go:172] (0xc000104d10) (0xc0005f28c0) Stream added, broadcasting: 1\nI0203 14:20:30.929195    2095 log.go:172] (0xc000104d10) Reply frame received for 1\nI0203 14:20:30.929321    2095 log.go:172] (0xc000104d10) (0xc00020c000) Create stream\nI0203 14:20:30.929337    2095 log.go:172] (0xc000104d10) (0xc00020c000) Stream added, broadcasting: 3\nI0203 14:20:30.934806    2095 log.go:172] (0xc000104d10) Reply frame received for 3\nI0203 14:20:30.934878    2095 log.go:172] (0xc000104d10) (0xc00020c0a0) Create stream\nI0203 14:20:30.934897    2095 log.go:172] (0xc000104d10) (0xc00020c0a0) Stream added, broadcasting: 5\nI0203 14:20:30.936157    2095 log.go:172] (0xc000104d10) Reply frame received for 5\nI0203 14:20:31.169230    2095 log.go:172] (0xc000104d10) Data frame received for 5\nI0203 14:20:31.169259    2095 log.go:172] (0xc00020c0a0) (5) Data frame handling\nI0203 14:20:31.169273    2095 log.go:172] (0xc00020c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 14:20:31.260863    2095 log.go:172] (0xc000104d10) Data frame received for 3\nI0203 14:20:31.260909    2095 log.go:172] (0xc00020c000) (3) Data frame handling\nI0203 14:20:31.260922    2095 log.go:172] (0xc00020c000) (3) Data frame sent\nI0203 14:20:31.334292    2095 log.go:172] (0xc000104d10) (0xc00020c000) Stream removed, broadcasting: 3\nI0203 14:20:31.334413    2095 log.go:172] (0xc000104d10) Data frame received for 1\nI0203 14:20:31.334431    2095 log.go:172] (0xc0005f28c0) (1) Data frame handling\nI0203 14:20:31.334439    2095 log.go:172] (0xc0005f28c0) (1) Data frame sent\nI0203 14:20:31.334445    2095 log.go:172] (0xc000104d10) (0xc0005f28c0) Stream removed, broadcasting: 1\nI0203 14:20:31.334799    2095 log.go:172] (0xc000104d10) (0xc00020c0a0) Stream removed, broadcasting: 5\nI0203 14:20:31.334829    2095 log.go:172] 
(0xc000104d10) (0xc0005f28c0) Stream removed, broadcasting: 1\nI0203 14:20:31.334840    2095 log.go:172] (0xc000104d10) (0xc00020c000) Stream removed, broadcasting: 3\nI0203 14:20:31.334846    2095 log.go:172] (0xc000104d10) (0xc00020c0a0) Stream removed, broadcasting: 5\nI0203 14:20:31.334924    2095 log.go:172] (0xc000104d10) Go away received\n"
Feb  3 14:20:31.342: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 14:20:31.342: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 14:20:31.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7726 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 14:20:31.877: INFO: stderr: "I0203 14:20:31.541943    2110 log.go:172] (0xc00090a420) (0xc0009226e0) Create stream\nI0203 14:20:31.542013    2110 log.go:172] (0xc00090a420) (0xc0009226e0) Stream added, broadcasting: 1\nI0203 14:20:31.548333    2110 log.go:172] (0xc00090a420) Reply frame received for 1\nI0203 14:20:31.548453    2110 log.go:172] (0xc00090a420) (0xc0006ee3c0) Create stream\nI0203 14:20:31.548480    2110 log.go:172] (0xc00090a420) (0xc0006ee3c0) Stream added, broadcasting: 3\nI0203 14:20:31.550571    2110 log.go:172] (0xc00090a420) Reply frame received for 3\nI0203 14:20:31.550659    2110 log.go:172] (0xc00090a420) (0xc00074e000) Create stream\nI0203 14:20:31.550670    2110 log.go:172] (0xc00090a420) (0xc00074e000) Stream added, broadcasting: 5\nI0203 14:20:31.552365    2110 log.go:172] (0xc00090a420) Reply frame received for 5\nI0203 14:20:31.657366    2110 log.go:172] (0xc00090a420) Data frame received for 5\nI0203 14:20:31.657464    2110 log.go:172] (0xc00074e000) (5) Data frame handling\nI0203 14:20:31.657501    2110 log.go:172] (0xc00074e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 14:20:31.694514    2110 log.go:172] (0xc00090a420) Data frame received for 3\nI0203 14:20:31.694614    2110 log.go:172] (0xc0006ee3c0) (3) Data frame handling\nI0203 14:20:31.694634    2110 log.go:172] (0xc0006ee3c0) (3) Data frame sent\nI0203 14:20:31.855728    2110 log.go:172] (0xc00090a420) Data frame received for 1\nI0203 14:20:31.855888    2110 log.go:172] (0xc00090a420) (0xc0006ee3c0) Stream removed, broadcasting: 3\nI0203 14:20:31.855952    2110 log.go:172] (0xc0009226e0) (1) Data frame handling\nI0203 14:20:31.856024    2110 log.go:172] (0xc0009226e0) (1) Data frame sent\nI0203 14:20:31.856038    2110 log.go:172] (0xc00090a420) (0xc00074e000) Stream removed, broadcasting: 5\nI0203 14:20:31.856112    2110 log.go:172] (0xc00090a420) (0xc0009226e0) Stream removed, broadcasting: 1\nI0203 14:20:31.856136    2110 log.go:172] 
(0xc00090a420) Go away received\nI0203 14:20:31.857657    2110 log.go:172] (0xc00090a420) (0xc0009226e0) Stream removed, broadcasting: 1\nI0203 14:20:31.858108    2110 log.go:172] (0xc00090a420) (0xc0006ee3c0) Stream removed, broadcasting: 3\nI0203 14:20:31.858411    2110 log.go:172] (0xc00090a420) (0xc00074e000) Stream removed, broadcasting: 5\n"
Feb  3 14:20:31.877: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 14:20:31.877: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 14:20:31.877: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 14:20:31.895: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  3 14:20:41.916: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 14:20:41.917: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 14:20:41.917: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  3 14:20:41.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997372s
Feb  3 14:20:42.961: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994022191s
Feb  3 14:20:43.971: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98345644s
Feb  3 14:20:44.977: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.973509021s
Feb  3 14:20:45.986: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.966767184s
Feb  3 14:20:46.999: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.958604432s
Feb  3 14:20:48.009: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.945423973s
Feb  3 14:20:49.019: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.935245129s
Feb  3 14:20:50.028: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.925166506s
Feb  3 14:20:51.039: INFO: Verifying statefulset ss doesn't scale past 3 for another 915.835596ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7726
Feb  3 14:20:52.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7726 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 14:20:52.675: INFO: stderr: "I0203 14:20:52.271613    2131 log.go:172] (0xc000118790) (0xc0003d46e0) Create stream\nI0203 14:20:52.271831    2131 log.go:172] (0xc000118790) (0xc0003d46e0) Stream added, broadcasting: 1\nI0203 14:20:52.281303    2131 log.go:172] (0xc000118790) Reply frame received for 1\nI0203 14:20:52.281412    2131 log.go:172] (0xc000118790) (0xc00081a000) Create stream\nI0203 14:20:52.281463    2131 log.go:172] (0xc000118790) (0xc00081a000) Stream added, broadcasting: 3\nI0203 14:20:52.283789    2131 log.go:172] (0xc000118790) Reply frame received for 3\nI0203 14:20:52.283877    2131 log.go:172] (0xc000118790) (0xc0006543c0) Create stream\nI0203 14:20:52.283890    2131 log.go:172] (0xc000118790) (0xc0006543c0) Stream added, broadcasting: 5\nI0203 14:20:52.285611    2131 log.go:172] (0xc000118790) Reply frame received for 5\nI0203 14:20:52.426687    2131 log.go:172] (0xc000118790) Data frame received for 3\nI0203 14:20:52.426849    2131 log.go:172] (0xc00081a000) (3) Data frame handling\nI0203 14:20:52.426901    2131 log.go:172] (0xc00081a000) (3) Data frame sent\nI0203 14:20:52.426973    2131 log.go:172] (0xc000118790) Data frame received for 5\nI0203 14:20:52.426993    2131 log.go:172] (0xc0006543c0) (5) Data frame handling\nI0203 14:20:52.427014    2131 log.go:172] (0xc0006543c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0203 14:20:52.658210    2131 log.go:172] (0xc000118790) (0xc00081a000) Stream removed, broadcasting: 3\nI0203 14:20:52.658367    2131 log.go:172] (0xc000118790) Data frame received for 1\nI0203 14:20:52.658661    2131 log.go:172] (0xc000118790) (0xc0006543c0) Stream removed, broadcasting: 5\nI0203 14:20:52.658865    2131 log.go:172] (0xc0003d46e0) (1) Data frame handling\nI0203 14:20:52.658904    2131 log.go:172] (0xc0003d46e0) (1) Data frame sent\nI0203 14:20:52.658916    2131 log.go:172] (0xc000118790) (0xc0003d46e0) Stream removed, broadcasting: 1\nI0203 14:20:52.658933    2131 log.go:172] 
(0xc000118790) Go away received\nI0203 14:20:52.660434    2131 log.go:172] (0xc000118790) (0xc0003d46e0) Stream removed, broadcasting: 1\nI0203 14:20:52.660450    2131 log.go:172] (0xc000118790) (0xc00081a000) Stream removed, broadcasting: 3\nI0203 14:20:52.660465    2131 log.go:172] (0xc000118790) (0xc0006543c0) Stream removed, broadcasting: 5\n"
Feb  3 14:20:52.676: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 14:20:52.676: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 14:20:52.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7726 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 14:20:53.154: INFO: stderr: "I0203 14:20:52.883411    2152 log.go:172] (0xc00091e370) (0xc0008ca640) Create stream\nI0203 14:20:52.883457    2152 log.go:172] (0xc00091e370) (0xc0008ca640) Stream added, broadcasting: 1\nI0203 14:20:52.886109    2152 log.go:172] (0xc00091e370) Reply frame received for 1\nI0203 14:20:52.886152    2152 log.go:172] (0xc00091e370) (0xc0003ff360) Create stream\nI0203 14:20:52.886166    2152 log.go:172] (0xc00091e370) (0xc0003ff360) Stream added, broadcasting: 3\nI0203 14:20:52.887426    2152 log.go:172] (0xc00091e370) Reply frame received for 3\nI0203 14:20:52.887503    2152 log.go:172] (0xc00091e370) (0xc0002e0fa0) Create stream\nI0203 14:20:52.887520    2152 log.go:172] (0xc00091e370) (0xc0002e0fa0) Stream added, broadcasting: 5\nI0203 14:20:52.889297    2152 log.go:172] (0xc00091e370) Reply frame received for 5\nI0203 14:20:53.045385    2152 log.go:172] (0xc00091e370) Data frame received for 5\nI0203 14:20:53.046132    2152 log.go:172] (0xc0002e0fa0) (5) Data frame handling\nI0203 14:20:53.046208    2152 log.go:172] (0xc0002e0fa0) (5) Data frame sent\nI0203 14:20:53.046229    2152 log.go:172] (0xc00091e370) Data frame received for 5\nI0203 14:20:53.046273    2152 log.go:172] (0xc0002e0fa0) (5) Data frame handling\n+ mvI0203 14:20:53.046368    2152 log.go:172] (0xc0002e0fa0) (5) Data frame sent\nI0203 14:20:53.046386    2152 log.go:172] (0xc00091e370) Data frame received for 5\nI0203 14:20:53.046413    2152 log.go:172] (0xc0002e0fa0) (5) Data frame handling\nI0203 14:20:53.046432    2152 log.go:172] (0xc0002e0fa0) (5) Data frame sent\nI0203 14:20:53.046456    2152 log.go:172] (0xc00091e370) Data frame received for 5\nI0203 14:20:53.046468    2152 log.go:172] (0xc0002e0fa0) (5) Data frame handling\n -v /tmp/index.htmlI0203 14:20:53.046575    2152 log.go:172] (0xc0002e0fa0) (5) Data frame sent\nI0203 14:20:53.046602    2152 log.go:172] (0xc00091e370) Data frame received for 5\nI0203 14:20:53.046625    2152 log.go:172] 
(0xc0002e0fa0) (5) Data frame handling\nI0203 14:20:53.046635    2152 log.go:172] (0xc0002e0fa0) (5) Data frame sent\nI0203 14:20:53.046681    2152 log.go:172] (0xc00091e370) Data frame received for 5\nI0203 14:20:53.046696    2152 log.go:172] (0xc0002e0fa0) (5) Data frame handling\n /usr/share/nginx/html/\nI0203 14:20:53.046748    2152 log.go:172] (0xc0002e0fa0) (5) Data frame sent\nI0203 14:20:53.046763    2152 log.go:172] (0xc00091e370) Data frame received for 3\nI0203 14:20:53.046782    2152 log.go:172] (0xc0003ff360) (3) Data frame handling\nI0203 14:20:53.046876    2152 log.go:172] (0xc0003ff360) (3) Data frame sent\nI0203 14:20:53.147127    2152 log.go:172] (0xc00091e370) Data frame received for 1\nI0203 14:20:53.147259    2152 log.go:172] (0xc00091e370) (0xc0003ff360) Stream removed, broadcasting: 3\nI0203 14:20:53.147311    2152 log.go:172] (0xc0008ca640) (1) Data frame handling\nI0203 14:20:53.147321    2152 log.go:172] (0xc00091e370) (0xc0002e0fa0) Stream removed, broadcasting: 5\nI0203 14:20:53.147355    2152 log.go:172] (0xc0008ca640) (1) Data frame sent\nI0203 14:20:53.147378    2152 log.go:172] (0xc00091e370) (0xc0008ca640) Stream removed, broadcasting: 1\nI0203 14:20:53.147410    2152 log.go:172] (0xc00091e370) Go away received\nI0203 14:20:53.148170    2152 log.go:172] (0xc00091e370) (0xc0008ca640) Stream removed, broadcasting: 1\nI0203 14:20:53.148201    2152 log.go:172] (0xc00091e370) (0xc0003ff360) Stream removed, broadcasting: 3\nI0203 14:20:53.148205    2152 log.go:172] (0xc00091e370) (0xc0002e0fa0) Stream removed, broadcasting: 5\n"
Feb  3 14:20:53.154: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 14:20:53.154: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 14:20:53.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7726 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 14:20:53.658: INFO: stderr: "I0203 14:20:53.328848    2173 log.go:172] (0xc000104dc0) (0xc00063e8c0) Create stream\nI0203 14:20:53.328951    2173 log.go:172] (0xc000104dc0) (0xc00063e8c0) Stream added, broadcasting: 1\nI0203 14:20:53.334323    2173 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0203 14:20:53.334364    2173 log.go:172] (0xc000104dc0) (0xc00071a000) Create stream\nI0203 14:20:53.334388    2173 log.go:172] (0xc000104dc0) (0xc00071a000) Stream added, broadcasting: 3\nI0203 14:20:53.336212    2173 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0203 14:20:53.336322    2173 log.go:172] (0xc000104dc0) (0xc00063e960) Create stream\nI0203 14:20:53.336351    2173 log.go:172] (0xc000104dc0) (0xc00063e960) Stream added, broadcasting: 5\nI0203 14:20:53.337703    2173 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0203 14:20:53.438267    2173 log.go:172] (0xc000104dc0) Data frame received for 5\nI0203 14:20:53.438357    2173 log.go:172] (0xc00063e960) (5) Data frame handling\nI0203 14:20:53.438399    2173 log.go:172] (0xc00063e960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0203 14:20:53.443403    2173 log.go:172] (0xc000104dc0) Data frame received for 3\nI0203 14:20:53.443426    2173 log.go:172] (0xc00071a000) (3) Data frame handling\nI0203 14:20:53.443450    2173 log.go:172] (0xc00071a000) (3) Data frame sent\nI0203 14:20:53.632641    2173 log.go:172] (0xc000104dc0) Data frame received for 1\nI0203 14:20:53.633009    2173 log.go:172] (0xc000104dc0) (0xc00063e960) Stream removed, broadcasting: 5\nI0203 14:20:53.633127    2173 log.go:172] (0xc00063e8c0) (1) Data frame handling\nI0203 14:20:53.633177    2173 log.go:172] (0xc00063e8c0) (1) Data frame sent\nI0203 14:20:53.633290    2173 log.go:172] (0xc000104dc0) (0xc00071a000) Stream removed, broadcasting: 3\nI0203 14:20:53.633361    2173 log.go:172] (0xc000104dc0) (0xc00063e8c0) Stream removed, broadcasting: 1\nI0203 14:20:53.633386    2173 log.go:172] 
(0xc000104dc0) Go away received\nI0203 14:20:53.634427    2173 log.go:172] (0xc000104dc0) (0xc00063e8c0) Stream removed, broadcasting: 1\nI0203 14:20:53.634458    2173 log.go:172] (0xc000104dc0) (0xc00071a000) Stream removed, broadcasting: 3\nI0203 14:20:53.634481    2173 log.go:172] (0xc000104dc0) (0xc00063e960) Stream removed, broadcasting: 5\n"
Feb  3 14:20:53.659: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 14:20:53.659: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 14:20:53.659: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
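"Reverse order" here means the controller terminates the highest ordinal first: ss-2, then ss-1, then ss-0. A minimal sketch of that ordering rule (assumed helper name, not controller code):

```go
package main

import "fmt"

// scaleDownOrder returns the pod ordinals a StatefulSet controller removes
// when scaling from `from` replicas down to `to`: highest ordinal first.
func scaleDownOrder(from, to int) []int {
	var order []int
	for i := from - 1; i >= to; i-- {
		order = append(order, i)
	}
	return order
}

func main() {
	fmt.Println(scaleDownOrder(3, 0)) // [2 1 0] -> ss-2, then ss-1, then ss-0
}
```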
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  3 14:21:33.827: INFO: Deleting all statefulset in ns statefulset-7726
Feb  3 14:21:33.836: INFO: Scaling statefulset ss to 0
Feb  3 14:21:33.865: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 14:21:33.936: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:21:33.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7726" for this suite.
Feb  3 14:21:40.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:21:40.114: INFO: namespace statefulset-7726 deletion completed in 6.14085826s

• [SLOW TEST:123.854 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:21:40.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Feb  3 14:21:49.233: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
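Adoption and release both come down to label-selector matching: the ReplicaSet adopts an orphan whose labels satisfy its selector and releases a pod whose labels stop matching. A minimal sketch of the equality-based match (illustrative, not the controller's actual selector code):

```go
package main

import "fmt"

// matches reports whether pod labels satisfy every key/value pair in the
// selector; a ReplicaSet adopts matching orphans and releases non-matches.
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	sel := map[string]string{"name": "pod-adoption-release"}
	fmt.Println(matches(sel, map[string]string{"name": "pod-adoption-release"}))         // true  -> adopt
	fmt.Println(matches(sel, map[string]string{"name": "pod-adoption-release-changed"})) // false -> release
}
```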
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:21:50.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9483" for this suite.
Feb  3 14:22:12.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:22:12.457: INFO: namespace replicaset-9483 deletion completed in 22.183839092s

• [SLOW TEST:32.342 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:22:12.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  3 14:22:12.574: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-a,UID:648a95fc-0eca-434e-bb22-60be012dfea0,ResourceVersion:22950526,Generation:0,CreationTimestamp:2020-02-03 14:22:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 14:22:12.574: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-a,UID:648a95fc-0eca-434e-bb22-60be012dfea0,ResourceVersion:22950526,Generation:0,CreationTimestamp:2020-02-03 14:22:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  3 14:22:22.603: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-a,UID:648a95fc-0eca-434e-bb22-60be012dfea0,ResourceVersion:22950539,Generation:0,CreationTimestamp:2020-02-03 14:22:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  3 14:22:22.604: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-a,UID:648a95fc-0eca-434e-bb22-60be012dfea0,ResourceVersion:22950539,Generation:0,CreationTimestamp:2020-02-03 14:22:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  3 14:22:32.618: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-a,UID:648a95fc-0eca-434e-bb22-60be012dfea0,ResourceVersion:22950553,Generation:0,CreationTimestamp:2020-02-03 14:22:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 14:22:32.619: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-a,UID:648a95fc-0eca-434e-bb22-60be012dfea0,ResourceVersion:22950553,Generation:0,CreationTimestamp:2020-02-03 14:22:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  3 14:22:42.657: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-a,UID:648a95fc-0eca-434e-bb22-60be012dfea0,ResourceVersion:22950567,Generation:0,CreationTimestamp:2020-02-03 14:22:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  3 14:22:42.658: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-a,UID:648a95fc-0eca-434e-bb22-60be012dfea0,ResourceVersion:22950567,Generation:0,CreationTimestamp:2020-02-03 14:22:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  3 14:22:52.670: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-b,UID:31b1fb52-58e6-483f-a357-71ff15268414,ResourceVersion:22950582,Generation:0,CreationTimestamp:2020-02-03 14:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 14:22:52.670: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-b,UID:31b1fb52-58e6-483f-a357-71ff15268414,ResourceVersion:22950582,Generation:0,CreationTimestamp:2020-02-03 14:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  3 14:23:02.681: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-b,UID:31b1fb52-58e6-483f-a357-71ff15268414,ResourceVersion:22950596,Generation:0,CreationTimestamp:2020-02-03 14:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  3 14:23:02.681: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8822,SelfLink:/api/v1/namespaces/watch-8822/configmaps/e2e-watch-test-configmap-b,UID:31b1fb52-58e6-483f-a357-71ff15268414,ResourceVersion:22950596,Generation:0,CreationTimestamp:2020-02-03 14:22:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
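Each event above appears twice because two watches accept it: the label-specific watch and the "A or B" watch. A minimal Go sketch of that label-filtered fan-out (names like `deliver` and the watcher map are illustrative, not framework API):

```go
package main

import "fmt"

type event struct {
	kind  string // ADDED, MODIFIED, DELETED
	name  string // configmap name
	label string // value of the watch-this-configmap label
}

// deliver returns the watchers whose accepted label set contains the
// event's label, mirroring watches on label A, label B, and "A or B".
func deliver(ev event, watchers map[string][]string) []string {
	var got []string
	for name, accepted := range watchers {
		for _, l := range accepted {
			if l == ev.label {
				got = append(got, name)
			}
		}
	}
	return got
}

func main() {
	watchers := map[string][]string{
		"watch-A":    {"multiple-watchers-A"},
		"watch-B":    {"multiple-watchers-B"},
		"watch-AorB": {"multiple-watchers-A", "multiple-watchers-B"},
	}
	ev := event{"ADDED", "e2e-watch-test-configmap-a", "multiple-watchers-A"}
	fmt.Println(len(deliver(ev, watchers))) // 2 watchers observe a label-A event
}
```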
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:23:12.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8822" for this suite.
Feb  3 14:23:18.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:23:18.819: INFO: namespace watch-8822 deletion completed in 6.124678918s

• [SLOW TEST:66.362 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:23:18.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as an owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0203 14:23:33.213685       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  3 14:23:33.213: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:23:33.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8189" for this suite.
Feb  3 14:23:41.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:23:44.558: INFO: namespace gc-8189 deletion completed in 11.336639363s

• [SLOW TEST:25.739 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:23:44.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1bd857d2-3ef4-4da9-b143-e336092b384a
STEP: Creating a pod to test consume secrets
Feb  3 14:23:49.780: INFO: Waiting up to 5m0s for pod "pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2" in namespace "secrets-913" to be "success or failure"
Feb  3 14:23:49.789: INFO: Pod "pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.247081ms
Feb  3 14:23:51.837: INFO: Pod "pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057212409s
Feb  3 14:23:53.879: INFO: Pod "pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098971452s
Feb  3 14:23:55.909: INFO: Pod "pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128769338s
Feb  3 14:23:57.949: INFO: Pod "pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.168577187s
STEP: Saw pod success
Feb  3 14:23:57.949: INFO: Pod "pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2" satisfied condition "success or failure"
Feb  3 14:23:57.953: INFO: Trying to get logs from node iruya-node pod pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2 container secret-volume-test: 
STEP: delete the pod
Feb  3 14:23:58.014: INFO: Waiting for pod pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2 to disappear
Feb  3 14:23:58.033: INFO: Pod pod-secrets-12b58f16-930c-4d78-89d8-231299a82dd2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:23:58.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-913" for this suite.
Feb  3 14:24:04.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:24:04.146: INFO: namespace secrets-913 deletion completed in 6.10884507s
STEP: Destroying namespace "secret-namespace-7105" for this suite.
Feb  3 14:24:10.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:24:10.350: INFO: namespace secret-namespace-7105 deletion completed in 6.203815954s

• [SLOW TEST:25.791 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:24:10.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb  3 14:24:10.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  3 14:24:10.664: INFO: stderr: ""
Feb  3 14:24:10.664: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:24:10.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7781" for this suite.
Feb  3 14:24:16.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:24:16.861: INFO: namespace kubectl-7781 deletion completed in 6.190654493s

• [SLOW TEST:6.511 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:24:16.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 14:24:16.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8" in namespace "downward-api-5890" to be "success or failure"
Feb  3 14:24:16.950: INFO: Pod "downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.357102ms
Feb  3 14:24:18.966: INFO: Pod "downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025785867s
Feb  3 14:24:21.005: INFO: Pod "downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064528047s
Feb  3 14:24:23.023: INFO: Pod "downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083034693s
Feb  3 14:24:25.032: INFO: Pod "downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091211305s
STEP: Saw pod success
Feb  3 14:24:25.032: INFO: Pod "downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8" satisfied condition "success or failure"
Feb  3 14:24:25.038: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8 container client-container: 
STEP: delete the pod
Feb  3 14:24:25.100: INFO: Waiting for pod downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8 to disappear
Feb  3 14:24:25.103: INFO: Pod downwardapi-volume-0ce1c1a9-8809-40f1-bbbd-065e66d608e8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:24:25.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5890" for this suite.
Feb  3 14:24:31.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:24:31.209: INFO: namespace downward-api-5890 deletion completed in 6.102243906s

• [SLOW TEST:14.347 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:24:31.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 14:24:31.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8490'
Feb  3 14:24:31.471: INFO: stderr: ""
Feb  3 14:24:31.471: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  3 14:24:41.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-8490 -o json'
Feb  3 14:24:41.687: INFO: stderr: ""
Feb  3 14:24:41.687: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-03T14:24:31Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-8490\",\n        \"resourceVersion\": \"22950929\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-8490/pods/e2e-test-nginx-pod\",\n        \"uid\": \"cba64d01-141c-42f7-b2eb-8262b8a988a4\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-rbzfd\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-rbzfd\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-rbzfd\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-03T14:24:31Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-03T14:24:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-03T14:24:38Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-03T14:24:31Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://7794adcc583ec4bb2605c8d37dc6f361145cff099a45f6820a0785490d52df18\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-03T14:24:37Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-03T14:24:31Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  3 14:24:41.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8490'
Feb  3 14:24:42.284: INFO: stderr: ""
Feb  3 14:24:42.284: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb  3 14:24:42.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8490'
Feb  3 14:24:49.606: INFO: stderr: ""
Feb  3 14:24:49.606: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:24:49.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8490" for this suite.
Feb  3 14:24:55.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:24:55.961: INFO: namespace kubectl-8490 deletion completed in 6.34437252s

• [SLOW TEST:24.752 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:24:55.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  3 14:24:56.024: INFO: Waiting up to 5m0s for pod "downward-api-c3936200-98d3-4538-99eb-d02729c2e065" in namespace "downward-api-3094" to be "success or failure"
Feb  3 14:24:56.077: INFO: Pod "downward-api-c3936200-98d3-4538-99eb-d02729c2e065": Phase="Pending", Reason="", readiness=false. Elapsed: 52.918394ms
Feb  3 14:24:58.084: INFO: Pod "downward-api-c3936200-98d3-4538-99eb-d02729c2e065": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060092556s
Feb  3 14:25:00.095: INFO: Pod "downward-api-c3936200-98d3-4538-99eb-d02729c2e065": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070580818s
Feb  3 14:25:02.105: INFO: Pod "downward-api-c3936200-98d3-4538-99eb-d02729c2e065": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080489796s
Feb  3 14:25:04.111: INFO: Pod "downward-api-c3936200-98d3-4538-99eb-d02729c2e065": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086666254s
Feb  3 14:25:06.116: INFO: Pod "downward-api-c3936200-98d3-4538-99eb-d02729c2e065": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09213599s
STEP: Saw pod success
Feb  3 14:25:06.116: INFO: Pod "downward-api-c3936200-98d3-4538-99eb-d02729c2e065" satisfied condition "success or failure"
Feb  3 14:25:06.120: INFO: Trying to get logs from node iruya-node pod downward-api-c3936200-98d3-4538-99eb-d02729c2e065 container dapi-container: 
STEP: delete the pod
Feb  3 14:25:06.164: INFO: Waiting for pod downward-api-c3936200-98d3-4538-99eb-d02729c2e065 to disappear
Feb  3 14:25:06.170: INFO: Pod downward-api-c3936200-98d3-4538-99eb-d02729c2e065 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:25:06.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3094" for this suite.
Feb  3 14:25:12.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:25:12.291: INFO: namespace downward-api-3094 deletion completed in 6.115797157s

• [SLOW TEST:16.329 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:25:12.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  3 14:25:12.550: INFO: Waiting up to 5m0s for pod "pod-c32790f5-a082-4693-a32d-f42bb8429f02" in namespace "emptydir-9376" to be "success or failure"
Feb  3 14:25:12.638: INFO: Pod "pod-c32790f5-a082-4693-a32d-f42bb8429f02": Phase="Pending", Reason="", readiness=false. Elapsed: 87.758706ms
Feb  3 14:25:14.789: INFO: Pod "pod-c32790f5-a082-4693-a32d-f42bb8429f02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238208154s
Feb  3 14:25:16.797: INFO: Pod "pod-c32790f5-a082-4693-a32d-f42bb8429f02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246550932s
Feb  3 14:25:18.813: INFO: Pod "pod-c32790f5-a082-4693-a32d-f42bb8429f02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262843856s
Feb  3 14:25:20.818: INFO: Pod "pod-c32790f5-a082-4693-a32d-f42bb8429f02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.267630716s
STEP: Saw pod success
Feb  3 14:25:20.818: INFO: Pod "pod-c32790f5-a082-4693-a32d-f42bb8429f02" satisfied condition "success or failure"
Feb  3 14:25:20.822: INFO: Trying to get logs from node iruya-node pod pod-c32790f5-a082-4693-a32d-f42bb8429f02 container test-container: 
STEP: delete the pod
Feb  3 14:25:20.899: INFO: Waiting for pod pod-c32790f5-a082-4693-a32d-f42bb8429f02 to disappear
Feb  3 14:25:20.908: INFO: Pod pod-c32790f5-a082-4693-a32d-f42bb8429f02 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:25:20.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9376" for this suite.
Feb  3 14:25:26.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:25:27.042: INFO: namespace emptydir-9376 deletion completed in 6.126688448s

• [SLOW TEST:14.751 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:25:27.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-66cb7529-a530-480b-afca-610288e60bf9
STEP: Creating secret with name secret-projected-all-test-volume-38947022-cda5-4a85-b116-94ef0d81baaa
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  3 14:25:27.133: INFO: Waiting up to 5m0s for pod "projected-volume-664d0f15-27dc-4578-8782-b4bf98943312" in namespace "projected-1720" to be "success or failure"
Feb  3 14:25:27.144: INFO: Pod "projected-volume-664d0f15-27dc-4578-8782-b4bf98943312": Phase="Pending", Reason="", readiness=false. Elapsed: 10.862234ms
Feb  3 14:25:29.161: INFO: Pod "projected-volume-664d0f15-27dc-4578-8782-b4bf98943312": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028146881s
Feb  3 14:25:31.775: INFO: Pod "projected-volume-664d0f15-27dc-4578-8782-b4bf98943312": Phase="Pending", Reason="", readiness=false. Elapsed: 4.642140975s
Feb  3 14:25:33.792: INFO: Pod "projected-volume-664d0f15-27dc-4578-8782-b4bf98943312": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659129516s
Feb  3 14:25:35.809: INFO: Pod "projected-volume-664d0f15-27dc-4578-8782-b4bf98943312": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.676114027s
STEP: Saw pod success
Feb  3 14:25:35.810: INFO: Pod "projected-volume-664d0f15-27dc-4578-8782-b4bf98943312" satisfied condition "success or failure"
Feb  3 14:25:35.835: INFO: Trying to get logs from node iruya-node pod projected-volume-664d0f15-27dc-4578-8782-b4bf98943312 container projected-all-volume-test: 
STEP: delete the pod
Feb  3 14:25:36.012: INFO: Waiting for pod projected-volume-664d0f15-27dc-4578-8782-b4bf98943312 to disappear
Feb  3 14:25:36.032: INFO: Pod projected-volume-664d0f15-27dc-4578-8782-b4bf98943312 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:25:36.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1720" for this suite.
Feb  3 14:25:42.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:25:42.273: INFO: namespace projected-1720 deletion completed in 6.229672992s

• [SLOW TEST:15.231 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:25:42.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5841 to expose endpoints map[]
Feb  3 14:25:42.460: INFO: Get endpoints failed (6.356384ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  3 14:25:43.502: INFO: successfully validated that service endpoint-test2 in namespace services-5841 exposes endpoints map[] (1.048022193s elapsed)
STEP: Creating pod pod1 in namespace services-5841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5841 to expose endpoints map[pod1:[80]]
Feb  3 14:25:47.718: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.185880592s elapsed, will retry)
Feb  3 14:25:50.779: INFO: successfully validated that service endpoint-test2 in namespace services-5841 exposes endpoints map[pod1:[80]] (7.247163235s elapsed)
STEP: Creating pod pod2 in namespace services-5841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5841 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  3 14:25:54.974: INFO: Unexpected endpoints: found map[1d20bb04-6a68-45c6-ab63-461525a3a631:[80]], expected map[pod1:[80] pod2:[80]] (4.133330448s elapsed, will retry)
Feb  3 14:25:59.713: INFO: successfully validated that service endpoint-test2 in namespace services-5841 exposes endpoints map[pod1:[80] pod2:[80]] (8.872133134s elapsed)
STEP: Deleting pod pod1 in namespace services-5841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5841 to expose endpoints map[pod2:[80]]
Feb  3 14:25:59.769: INFO: successfully validated that service endpoint-test2 in namespace services-5841 exposes endpoints map[pod2:[80]] (33.746491ms elapsed)
STEP: Deleting pod pod2 in namespace services-5841
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5841 to expose endpoints map[]
Feb  3 14:26:00.811: INFO: successfully validated that service endpoint-test2 in namespace services-5841 exposes endpoints map[] (1.017668668s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:26:00.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5841" for this suite.
Feb  3 14:26:22.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:26:23.048: INFO: namespace services-5841 deletion completed in 22.156364287s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.775 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
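For reference, the objects this Services test appears to create can be sketched as below. The service name, pod names, port, and namespace are taken from the log; the selector label is a hypothetical stand-in for whatever label the e2e framework actually sets.

```shell
# Sketch of the Service + matching pod that would drive the endpoint
# transitions logged above (map[] -> map[pod1:[80]] -> ...).
cat <<'EOF' > endpoint-test2.yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: services-5841
spec:
  selector:
    testid: endpoint-test2      # hypothetical label; the real test uses its own selector
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: services-5841
  labels:
    testid: endpoint-test2      # must match the service selector to appear in endpoints
spec:
  containers:
  - name: serve
    image: docker.io/library/nginx:1.14-alpine   # image seen elsewhere in this run
    ports:
    - containerPort: 80
EOF
# kubectl apply -f endpoint-test2.yaml   # requires a cluster; not run here
```

Deleting pod1 would correspondingly shrink the endpoints map, matching the deletion phase of the test.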
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:26:23.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:27:10.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9025" for this suite.
Feb  3 14:27:16.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:27:16.815: INFO: namespace container-runtime-9025 deletion completed in 6.128656185s

• [SLOW TEST:53.767 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
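The container names here ('terminate-cmd-rpa', 'terminate-cmd-rpof', 'terminate-cmd-rpn') most likely encode the three restart policies: Always, OnFailure, Never. A minimal sketch of the OnFailure variant, with an assumed image and command (the real test uses its own e2e images and exit sequences):

```shell
# Hypothetical shape of one of the pods under test: a container that exits
# non-zero, letting the suite observe RestartCount, Phase, Ready, and State.
cat <<'EOF' > terminate-cmd-rpof.yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof
spec:
  restartPolicy: OnFailure        # kubelet restarts the container only on non-zero exit
  containers:
  - name: terminate-cmd-rpof
    image: busybox                 # assumed; not the image the e2e suite actually uses
    command: ["sh", "-c", "exit 1"]
EOF
```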
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:27:16.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-174230ce-afcc-4d88-b596-78021125f3a9
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:27:16.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9785" for this suite.
Feb  3 14:27:22.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:27:23.050: INFO: namespace configmap-9785 deletion completed in 6.133088643s

• [SLOW TEST:6.235 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
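The manifest shape this negative test presumably submits is a ConfigMap whose `data` map contains an empty-string key, which the API server rejects at validation time (ConfigMap keys must be non-empty and match `[-._a-zA-Z0-9]+`). A sketch:

```shell
# Invalid ConfigMap: an empty key in .data should be refused by the API server.
cat <<'EOF' > configmap-emptykey.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey
data:
  "": "value"        # invalid: empty key
EOF
# kubectl apply -f configmap-emptykey.yaml   # expected to fail validation (cluster required)
```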
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:27:23.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 14:27:23.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-773'
Feb  3 14:27:23.289: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 14:27:23.289: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb  3 14:27:27.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-773'
Feb  3 14:27:27.445: INFO: stderr: ""
Feb  3 14:27:27.445: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:27:27.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-773" for this suite.
Feb  3 14:27:33.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:27:33.689: INFO: namespace kubectl-773 deletion completed in 6.235832689s

• [SLOW TEST:10.639 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
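The stderr captured above shows that `kubectl run --generator=deployment/apps.v1` was already deprecated at this version. The invocation from the log and the replacement kubectl itself suggests are shown side by side; both need cluster access, so they are only echoed here:

```shell
# Deprecated form (as run by the test) vs. the suggested replacement.
OLD="kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-773"
NEW="kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-773"
echo "$OLD" > kubectl-cmds.txt
echo "$NEW" >> kubectl-cmds.txt
```

Note the API-group drift visible in the log: the object is created as `deployment.apps` but deleted as `deployment.extensions`, a quirk of how kubectl resolved the `deployment` shortname in 1.15.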
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:27:33.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-18cb9c45-4532-4155-af4e-a7e2aef63661
STEP: Creating a pod to test consume secrets
Feb  3 14:27:34.005: INFO: Waiting up to 5m0s for pod "pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88" in namespace "secrets-5587" to be "success or failure"
Feb  3 14:27:34.014: INFO: Pod "pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.357401ms
Feb  3 14:27:36.018: INFO: Pod "pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013300905s
Feb  3 14:27:38.037: INFO: Pod "pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032094716s
Feb  3 14:27:40.055: INFO: Pod "pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88": Phase="Running", Reason="", readiness=true. Elapsed: 6.049938553s
Feb  3 14:27:42.083: INFO: Pod "pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077877461s
STEP: Saw pod success
Feb  3 14:27:42.084: INFO: Pod "pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88" satisfied condition "success or failure"
Feb  3 14:27:42.101: INFO: Trying to get logs from node iruya-node pod pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88 container secret-volume-test: 
STEP: delete the pod
Feb  3 14:27:42.454: INFO: Waiting for pod pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88 to disappear
Feb  3 14:27:42.472: INFO: Pod pod-secrets-20dd7db4-907b-415f-9dd8-cf99828fea88 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:27:42.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5587" for this suite.
Feb  3 14:27:48.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:27:48.676: INFO: namespace secrets-5587 deletion completed in 6.177579378s

• [SLOW TEST:14.986 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
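"With mappings" in this Secrets test refers to the `items` field, which maps a secret key to a custom file path inside the volume. A sketch with assumed key and path names (the secret name is from the log):

```shell
# Hypothetical key/path mapping; the real test consumes its own key names.
cat <<'EOF' > secret-volume-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # assumed; the e2e suite uses a mounttest image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-18cb9c45-4532-4155-af4e-a7e2aef63661
      items:
      - key: data-1                # assumed key name
        path: new-path-data-1      # file appears at this path instead of the key name
EOF
```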
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:27:48.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 14:27:48.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9" in namespace "downward-api-988" to be "success or failure"
Feb  3 14:27:48.910: INFO: Pod "downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 76.73327ms
Feb  3 14:27:50.925: INFO: Pod "downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091986178s
Feb  3 14:27:52.936: INFO: Pod "downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10253588s
Feb  3 14:27:54.946: INFO: Pod "downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113429832s
Feb  3 14:27:57.006: INFO: Pod "downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.172726558s
STEP: Saw pod success
Feb  3 14:27:57.006: INFO: Pod "downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9" satisfied condition "success or failure"
Feb  3 14:27:57.069: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9 container client-container: 
STEP: delete the pod
Feb  3 14:27:57.211: INFO: Waiting for pod downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9 to disappear
Feb  3 14:27:57.218: INFO: Pod downwardapi-volume-deb9c09e-3efd-42d5-b3cf-43050d843ff9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:27:57.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-988" for this suite.
Feb  3 14:28:03.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:28:03.357: INFO: namespace downward-api-988 deletion completed in 6.133856632s

• [SLOW TEST:14.681 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
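Exposing a container's memory request through a downward API volume uses a `resourceFieldRef` item. A minimal sketch, assuming the container name from the log ('client-container') and an arbitrary request value:

```shell
# Downward API volume: the memory request is written to /etc/podinfo/memory_request.
cat <<'EOF' > downwardapi-mem-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mem
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox               # assumed image
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi             # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi           # file contains the request expressed in Mi
EOF
```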
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:28:03.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 14:28:03.490: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8" in namespace "projected-9456" to be "success or failure"
Feb  3 14:28:03.496: INFO: Pod "downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.678597ms
Feb  3 14:28:05.508: INFO: Pod "downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017455905s
Feb  3 14:28:07.519: INFO: Pod "downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028324497s
Feb  3 14:28:09.529: INFO: Pod "downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038274885s
Feb  3 14:28:11.541: INFO: Pod "downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050537576s
STEP: Saw pod success
Feb  3 14:28:11.541: INFO: Pod "downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8" satisfied condition "success or failure"
Feb  3 14:28:11.550: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8 container client-container: 
STEP: delete the pod
Feb  3 14:28:11.675: INFO: Waiting for pod downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8 to disappear
Feb  3 14:28:11.683: INFO: Pod downwardapi-volume-fd337ea5-3f18-47c3-a5b4-b347fc2c32a8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:28:11.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9456" for this suite.
Feb  3 14:28:17.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:28:17.846: INFO: namespace projected-9456 deletion completed in 6.158116867s

• [SLOW TEST:14.489 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
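The projected variant wraps the same downward API items in a `projected` volume, which can combine downward API data with secrets and configMaps in one mount. A sketch of just the volume stanza for a CPU request (names assumed):

```shell
# Projected volume carrying a downwardAPI source for requests.cpu.
cat <<'EOF' > projected-cpu-volume.yaml
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container   # must name a container in the pod
            resource: requests.cpu
EOF
```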
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:28:17.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 14:28:26.109: INFO: Waiting up to 5m0s for pod "client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce" in namespace "pods-2770" to be "success or failure"
Feb  3 14:28:26.118: INFO: Pod "client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.616502ms
Feb  3 14:28:28.129: INFO: Pod "client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019599038s
Feb  3 14:28:30.140: INFO: Pod "client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031313409s
Feb  3 14:28:32.150: INFO: Pod "client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041102333s
Feb  3 14:28:34.159: INFO: Pod "client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049691827s
STEP: Saw pod success
Feb  3 14:28:34.159: INFO: Pod "client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce" satisfied condition "success or failure"
Feb  3 14:28:34.161: INFO: Trying to get logs from node iruya-node pod client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce container env3cont: 
STEP: delete the pod
Feb  3 14:28:34.230: INFO: Waiting for pod client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce to disappear
Feb  3 14:28:34.235: INFO: Pod client-envvars-4eb751b1-47c9-4af4-886e-a62cecb945ce no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:28:34.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2770" for this suite.
Feb  3 14:29:20.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:29:20.370: INFO: namespace pods-2770 deletion completed in 46.130625081s

• [SLOW TEST:62.522 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
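This Pods test checks the docker-link-style environment variables the kubelet injects for every service that exists when a container starts (which is why the service must be created before the client pod). For a hypothetical service named `fooservice` on port 8765, the injected variables would look roughly like:

```shell
# Hypothetical service name, ClusterIP, and port; only the variable naming
# pattern (UPPERCASED_NAME_SERVICE_HOST/_PORT) is the point here.
cat <<'EOF' > expected-env.txt
FOOSERVICE_SERVICE_HOST=10.0.0.10
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT=tcp://10.0.0.10:8765
EOF
```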
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:29:20.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2511
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  3 14:29:20.479: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  3 14:29:52.708: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-2511 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 14:29:52.708: INFO: >>> kubeConfig: /root/.kube/config
I0203 14:29:52.797170       8 log.go:172] (0xc002575a20) (0xc0016a2e60) Create stream
I0203 14:29:52.797276       8 log.go:172] (0xc002575a20) (0xc0016a2e60) Stream added, broadcasting: 1
I0203 14:29:52.808629       8 log.go:172] (0xc002575a20) Reply frame received for 1
I0203 14:29:52.808727       8 log.go:172] (0xc002575a20) (0xc0026a81e0) Create stream
I0203 14:29:52.808747       8 log.go:172] (0xc002575a20) (0xc0026a81e0) Stream added, broadcasting: 3
I0203 14:29:52.815192       8 log.go:172] (0xc002575a20) Reply frame received for 3
I0203 14:29:52.815310       8 log.go:172] (0xc002575a20) (0xc0016a2f00) Create stream
I0203 14:29:52.815408       8 log.go:172] (0xc002575a20) (0xc0016a2f00) Stream added, broadcasting: 5
I0203 14:29:52.817479       8 log.go:172] (0xc002575a20) Reply frame received for 5
I0203 14:29:53.063619       8 log.go:172] (0xc002575a20) Data frame received for 3
I0203 14:29:53.063739       8 log.go:172] (0xc0026a81e0) (3) Data frame handling
I0203 14:29:53.063795       8 log.go:172] (0xc0026a81e0) (3) Data frame sent
I0203 14:29:53.184762       8 log.go:172] (0xc002575a20) Data frame received for 1
I0203 14:29:53.184902       8 log.go:172] (0xc0016a2e60) (1) Data frame handling
I0203 14:29:53.184991       8 log.go:172] (0xc0016a2e60) (1) Data frame sent
I0203 14:29:53.185029       8 log.go:172] (0xc002575a20) (0xc0016a2e60) Stream removed, broadcasting: 1
I0203 14:29:53.185431       8 log.go:172] (0xc002575a20) (0xc0026a81e0) Stream removed, broadcasting: 3
I0203 14:29:53.185702       8 log.go:172] (0xc002575a20) (0xc0016a2f00) Stream removed, broadcasting: 5
I0203 14:29:53.185806       8 log.go:172] (0xc002575a20) (0xc0016a2e60) Stream removed, broadcasting: 1
I0203 14:29:53.185827       8 log.go:172] (0xc002575a20) (0xc0026a81e0) Stream removed, broadcasting: 3
I0203 14:29:53.185844       8 log.go:172] (0xc002575a20) (0xc0016a2f00) Stream removed, broadcasting: 5
Feb  3 14:29:53.186: INFO: Waiting for endpoints: map[]
I0203 14:29:53.187132       8 log.go:172] (0xc002575a20) Go away received
Feb  3 14:29:53.196: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-2511 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 14:29:53.196: INFO: >>> kubeConfig: /root/.kube/config
I0203 14:29:53.270662       8 log.go:172] (0xc0014ca9a0) (0xc0026a8460) Create stream
I0203 14:29:53.271149       8 log.go:172] (0xc0014ca9a0) (0xc0026a8460) Stream added, broadcasting: 1
I0203 14:29:53.284648       8 log.go:172] (0xc0014ca9a0) Reply frame received for 1
I0203 14:29:53.284703       8 log.go:172] (0xc0014ca9a0) (0xc0030a0a00) Create stream
I0203 14:29:53.284710       8 log.go:172] (0xc0014ca9a0) (0xc0030a0a00) Stream added, broadcasting: 3
I0203 14:29:53.285957       8 log.go:172] (0xc0014ca9a0) Reply frame received for 3
I0203 14:29:53.286004       8 log.go:172] (0xc0014ca9a0) (0xc0013dc000) Create stream
I0203 14:29:53.286012       8 log.go:172] (0xc0014ca9a0) (0xc0013dc000) Stream added, broadcasting: 5
I0203 14:29:53.287161       8 log.go:172] (0xc0014ca9a0) Reply frame received for 5
I0203 14:29:53.479111       8 log.go:172] (0xc0014ca9a0) Data frame received for 3
I0203 14:29:53.479255       8 log.go:172] (0xc0030a0a00) (3) Data frame handling
I0203 14:29:53.479313       8 log.go:172] (0xc0030a0a00) (3) Data frame sent
I0203 14:29:53.645168       8 log.go:172] (0xc0014ca9a0) (0xc0030a0a00) Stream removed, broadcasting: 3
I0203 14:29:53.645337       8 log.go:172] (0xc0014ca9a0) Data frame received for 1
I0203 14:29:53.645384       8 log.go:172] (0xc0026a8460) (1) Data frame handling
I0203 14:29:53.645461       8 log.go:172] (0xc0026a8460) (1) Data frame sent
I0203 14:29:53.645544       8 log.go:172] (0xc0014ca9a0) (0xc0013dc000) Stream removed, broadcasting: 5
I0203 14:29:53.645587       8 log.go:172] (0xc0014ca9a0) (0xc0026a8460) Stream removed, broadcasting: 1
I0203 14:29:53.645607       8 log.go:172] (0xc0014ca9a0) Go away received
I0203 14:29:53.645987       8 log.go:172] (0xc0014ca9a0) (0xc0026a8460) Stream removed, broadcasting: 1
I0203 14:29:53.646011       8 log.go:172] (0xc0014ca9a0) (0xc0030a0a00) Stream removed, broadcasting: 3
I0203 14:29:53.646038       8 log.go:172] (0xc0014ca9a0) (0xc0013dc000) Stream removed, broadcasting: 5
Feb  3 14:29:53.646: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:29:53.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2511" for this suite.
Feb  3 14:30:15.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:30:15.812: INFO: namespace pod-network-test-2511 deletion completed in 22.156191188s

• [SLOW TEST:55.442 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
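The `ExecWithOptions` lines above show how the framework probes pod-to-pod HTTP: it execs `curl` inside a host-network test-container pod, asking the netexec agent in one pod to "dial" another pod and report which hostnames answered. The URL it constructs (with the pod IPs from this run) can be rebuilt as:

```shell
# Reconstruction of the dial URL from the log; 10.44.0.2 is the probing pod,
# 10.32.0.4 the target. Running the actual curl would require cluster access.
DIAL_POD="10.44.0.2:8080"
TARGET_POD="10.32.0.4"
URL="http://${DIAL_POD}/dial?request=hostName&protocol=http&host=${TARGET_POD}&port=8080&tries=1"
echo "$URL" > dial-url.txt
```

An empty `Waiting for endpoints: map[]` afterwards means every expected hostname replied, so the check passed.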
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:30:15.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 14:30:15.943: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  3 14:30:15.971: INFO: Number of nodes with available pods: 0
Feb  3 14:30:15.971: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:18.378: INFO: Number of nodes with available pods: 0
Feb  3 14:30:18.378: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:18.987: INFO: Number of nodes with available pods: 0
Feb  3 14:30:18.987: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:20.037: INFO: Number of nodes with available pods: 0
Feb  3 14:30:20.038: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:20.980: INFO: Number of nodes with available pods: 0
Feb  3 14:30:20.980: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:23.281: INFO: Number of nodes with available pods: 0
Feb  3 14:30:23.281: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:24.002: INFO: Number of nodes with available pods: 0
Feb  3 14:30:24.002: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:25.013: INFO: Number of nodes with available pods: 0
Feb  3 14:30:25.013: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:25.986: INFO: Number of nodes with available pods: 1
Feb  3 14:30:25.986: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 14:30:27.060: INFO: Number of nodes with available pods: 2
Feb  3 14:30:27.060: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  3 14:30:27.105: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:27.105: INFO: Wrong image for pod: daemon-set-nk89m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:28.128: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:28.128: INFO: Wrong image for pod: daemon-set-nk89m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:29.130: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:29.130: INFO: Wrong image for pod: daemon-set-nk89m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:30.267: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:30.267: INFO: Wrong image for pod: daemon-set-nk89m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:31.795: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:31.796: INFO: Wrong image for pod: daemon-set-nk89m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:32.125: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:32.126: INFO: Wrong image for pod: daemon-set-nk89m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:33.130: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:33.130: INFO: Wrong image for pod: daemon-set-nk89m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:34.133: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:34.133: INFO: Wrong image for pod: daemon-set-nk89m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:34.133: INFO: Pod daemon-set-nk89m is not available
Feb  3 14:30:35.130: INFO: Pod daemon-set-4sfhs is not available
Feb  3 14:30:35.130: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:36.314: INFO: Pod daemon-set-4sfhs is not available
Feb  3 14:30:36.314: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:37.127: INFO: Pod daemon-set-4sfhs is not available
Feb  3 14:30:37.127: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:38.133: INFO: Pod daemon-set-4sfhs is not available
Feb  3 14:30:38.133: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:39.238: INFO: Pod daemon-set-4sfhs is not available
Feb  3 14:30:39.238: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:40.195: INFO: Pod daemon-set-4sfhs is not available
Feb  3 14:30:40.196: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:41.137: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:42.135: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:43.130: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:44.129: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:45.130: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:46.133: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:46.133: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:47.133: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:47.133: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:48.134: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:48.134: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:49.134: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:49.135: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:50.169: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:50.169: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:51.126: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:51.127: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:52.136: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:52.136: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:53.132: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:53.132: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:54.169: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:54.169: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:55.133: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:55.133: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:56.129: INFO: Wrong image for pod: daemon-set-lctsc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  3 14:30:56.129: INFO: Pod daemon-set-lctsc is not available
Feb  3 14:30:57.135: INFO: Pod daemon-set-vrkzh is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  3 14:30:57.149: INFO: Number of nodes with available pods: 1
Feb  3 14:30:57.149: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:58.164: INFO: Number of nodes with available pods: 1
Feb  3 14:30:58.164: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:30:59.161: INFO: Number of nodes with available pods: 1
Feb  3 14:30:59.161: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:31:00.165: INFO: Number of nodes with available pods: 1
Feb  3 14:31:00.166: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:31:01.168: INFO: Number of nodes with available pods: 1
Feb  3 14:31:01.168: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:31:02.267: INFO: Number of nodes with available pods: 1
Feb  3 14:31:02.267: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:31:03.169: INFO: Number of nodes with available pods: 1
Feb  3 14:31:03.169: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:31:04.167: INFO: Number of nodes with available pods: 2
Feb  3 14:31:04.167: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8532, will wait for the garbage collector to delete the pods
Feb  3 14:31:04.279: INFO: Deleting DaemonSet.extensions daemon-set took: 15.255105ms
Feb  3 14:31:04.580: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.750881ms
Feb  3 14:31:10.787: INFO: Number of nodes with available pods: 0
Feb  3 14:31:10.788: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 14:31:10.792: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8532/daemonsets","resourceVersion":"22951963"},"items":null}

Feb  3 14:31:10.796: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8532/pods","resourceVersion":"22951963"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:31:10.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8532" for this suite.
Feb  3 14:31:16.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:31:16.982: INFO: namespace daemonsets-8532 deletion completed in 6.168061013s

• [SLOW TEST:61.169 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
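The DaemonSet exercised above can be sketched as a minimal manifest with a RollingUpdate strategy. The name, namespace, and images match what the log reports; the selector labels and container name are illustrative assumptions, since the real template is generated in daemon_set.go:

```yaml
# Illustrative reconstruction of the DaemonSet under test (labels/container
# name are assumptions; name and image come from the log above).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-8532
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate            # strategy under test
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # initial image from the log
```

Updating `spec.template.spec.containers[0].image` to `gcr.io/kubernetes-e2e-test-images/redis:1.0` is what triggers the pod-by-pod replacement visible in the "Wrong image for pod" polling above: each old pod is deleted, its replacement comes up, and the check loops until every node runs the new image.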
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:31:16.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  3 14:31:17.722: INFO: Pod name wrapped-volume-race-399a67c2-9c15-468a-af6f-fb5c3ff7d81c: Found 1 pods out of 5
Feb  3 14:31:22.741: INFO: Pod name wrapped-volume-race-399a67c2-9c15-468a-af6f-fb5c3ff7d81c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-399a67c2-9c15-468a-af6f-fb5c3ff7d81c in namespace emptydir-wrapper-2336, will wait for the garbage collector to delete the pods
Feb  3 14:31:50.872: INFO: Deleting ReplicationController wrapped-volume-race-399a67c2-9c15-468a-af6f-fb5c3ff7d81c took: 31.464833ms
Feb  3 14:31:51.173: INFO: Terminating ReplicationController wrapped-volume-race-399a67c2-9c15-468a-af6f-fb5c3ff7d81c pods took: 300.939989ms
STEP: Creating RC which spawns configmap-volume pods
Feb  3 14:32:37.622: INFO: Pod name wrapped-volume-race-52ffdd0a-606f-416c-bf85-6b4d69142bcf: Found 0 pods out of 5
Feb  3 14:32:42.653: INFO: Pod name wrapped-volume-race-52ffdd0a-606f-416c-bf85-6b4d69142bcf: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-52ffdd0a-606f-416c-bf85-6b4d69142bcf in namespace emptydir-wrapper-2336, will wait for the garbage collector to delete the pods
Feb  3 14:33:16.830: INFO: Deleting ReplicationController wrapped-volume-race-52ffdd0a-606f-416c-bf85-6b4d69142bcf took: 12.227032ms
Feb  3 14:33:17.231: INFO: Terminating ReplicationController wrapped-volume-race-52ffdd0a-606f-416c-bf85-6b4d69142bcf pods took: 400.658517ms
STEP: Creating RC which spawns configmap-volume pods
Feb  3 14:33:58.807: INFO: Pod name wrapped-volume-race-b9dc11c8-371b-4cbd-be41-6c4da5f8259c: Found 0 pods out of 5
Feb  3 14:34:03.838: INFO: Pod name wrapped-volume-race-b9dc11c8-371b-4cbd-be41-6c4da5f8259c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b9dc11c8-371b-4cbd-be41-6c4da5f8259c in namespace emptydir-wrapper-2336, will wait for the garbage collector to delete the pods
Feb  3 14:34:33.978: INFO: Deleting ReplicationController wrapped-volume-race-b9dc11c8-371b-4cbd-be41-6c4da5f8259c took: 15.749008ms
Feb  3 14:34:34.379: INFO: Terminating ReplicationController wrapped-volume-race-b9dc11c8-371b-4cbd-be41-6c4da5f8259c pods took: 400.704437ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:35:18.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2336" for this suite.
Feb  3 14:35:28.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:35:28.794: INFO: namespace emptydir-wrapper-2336 deletion completed in 10.16350083s

• [SLOW TEST:251.811 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
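The race test above creates 50 ConfigMaps, then repeatedly spawns and garbage-collects a ReplicationController whose 5 pods each mount all of them. A sketch of the pod template shape, with hypothetical names (the real template is generated in empty_dir_wrapper.go and mounts all 50 ConfigMaps, not just one):

```yaml
# Shape of one pod stamped out by the RC (names are illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-example
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine   # placeholder image
    volumeMounts:
    - name: configmap-vol-0
      mountPath: /etc/cm-0
  volumes:
  - name: configmap-vol-0          # one of ~50 such volumes in the real test
    configMap:
      name: wrapped-configmap-0    # hypothetical ConfigMap name
```

Mounting many ConfigMap volumes per pod across several pods at once is what historically raced in the kubelet's emptyDir wrapper handling; the test passes if every pod reaches Running on each of the three spawn/delete cycles logged above.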
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:35:28.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-015ae530-6849-40fe-92a7-134d3f69ca4e
STEP: Creating a pod to test consume secrets
Feb  3 14:35:28.980: INFO: Waiting up to 5m0s for pod "pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20" in namespace "secrets-749" to be "success or failure"
Feb  3 14:35:28.985: INFO: Pod "pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20": Phase="Pending", Reason="", readiness=false. Elapsed: 5.781854ms
Feb  3 14:35:30.998: INFO: Pod "pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018314064s
Feb  3 14:35:33.008: INFO: Pod "pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028420675s
Feb  3 14:35:35.015: INFO: Pod "pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035541427s
Feb  3 14:35:37.023: INFO: Pod "pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04386707s
Feb  3 14:35:39.030: INFO: Pod "pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20": Phase="Pending", Reason="", readiness=false. Elapsed: 10.05048034s
Feb  3 14:35:41.040: INFO: Pod "pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.060379097s
STEP: Saw pod success
Feb  3 14:35:41.040: INFO: Pod "pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20" satisfied condition "success or failure"
Feb  3 14:35:41.044: INFO: Trying to get logs from node iruya-node pod pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20 container secret-volume-test: 
STEP: delete the pod
Feb  3 14:35:41.154: INFO: Waiting for pod pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20 to disappear
Feb  3 14:35:41.224: INFO: Pod pod-secrets-0703fdc0-bdc1-4eec-88e0-f7e0eb8a4e20 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:35:41.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-749" for this suite.
Feb  3 14:35:47.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:35:47.439: INFO: namespace secrets-749 deletion completed in 6.20309003s

• [SLOW TEST:18.645 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
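The Secrets test above mounts one Secret into a pod at two separate volume mounts. A minimal sketch, using the secret and container names from the log (mount paths and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test       # container name reported in the log
    image: docker.io/library/nginx:1.14-alpine   # placeholder image
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-015ae530-6849-40fe-92a7-134d3f69ca4e
  - name: secret-volume-2
    secret:
      secretName: secret-test-015ae530-6849-40fe-92a7-134d3f69ca4e   # same Secret, second volume
```

The test container reads the key back from both mount points and exits; the pod reaching Phase="Succeeded" is the "success or failure" condition polled in the log.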
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:35:47.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-97ffa508-7f22-42ff-a154-c3180ec5c667
STEP: Creating a pod to test consume configMaps
Feb  3 14:35:47.644: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af" in namespace "projected-5617" to be "success or failure"
Feb  3 14:35:47.657: INFO: Pod "pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af": Phase="Pending", Reason="", readiness=false. Elapsed: 12.891929ms
Feb  3 14:35:49.666: INFO: Pod "pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022091236s
Feb  3 14:35:51.680: INFO: Pod "pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036336607s
Feb  3 14:35:53.692: INFO: Pod "pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047830878s
Feb  3 14:35:55.700: INFO: Pod "pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05607707s
Feb  3 14:35:57.742: INFO: Pod "pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097679442s
STEP: Saw pod success
Feb  3 14:35:57.742: INFO: Pod "pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af" satisfied condition "success or failure"
Feb  3 14:35:57.746: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af container projected-configmap-volume-test: 
STEP: delete the pod
Feb  3 14:35:57.851: INFO: Waiting for pod pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af to disappear
Feb  3 14:35:57.856: INFO: Pod pod-projected-configmaps-e9db9b94-e2e5-4f20-9768-8adae5b723af no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:35:57.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5617" for this suite.
Feb  3 14:36:03.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:36:04.071: INFO: namespace projected-5617 deletion completed in 6.205861521s

• [SLOW TEST:16.632 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
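The projected ConfigMap test above verifies the volume is readable by a non-root user. A sketch of the pod shape, with the ConfigMap name taken from the log; the UID, key names, and image are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  securityContext:
    runAsUser: 1000                # non-root UID; exact value used by the test is an assumption
  containers:
  - name: projected-configmap-volume-test   # container name from the log
    image: docker.io/library/nginx:1.14-alpine   # placeholder image
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-97ffa508-7f22-42ff-a154-c3180ec5c667
```

The [LinuxOnly] tag applies because the file-ownership semantics being checked are POSIX-specific.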
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:36:04.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 14:36:04.159: INFO: Creating ReplicaSet my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71
Feb  3 14:36:04.173: INFO: Pod name my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71: Found 0 pods out of 1
Feb  3 14:36:09.188: INFO: Pod name my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71: Found 1 pods out of 1
Feb  3 14:36:09.188: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71" is running
Feb  3 14:36:13.209: INFO: Pod "my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71-jdfsd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 14:36:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 14:36:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 14:36:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-03 14:36:04 +0000 UTC Reason: Message:}])
Feb  3 14:36:13.210: INFO: Trying to dial the pod
Feb  3 14:36:18.236: INFO: Controller my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71: Got expected result from replica 1 [my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71-jdfsd]: "my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71-jdfsd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:36:18.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2178" for this suite.
Feb  3 14:36:24.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:36:24.422: INFO: namespace replicaset-2178 deletion completed in 6.18001135s

• [SLOW TEST:20.350 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
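The ReplicaSet test above creates one replica of an image that serves its own hostname, then dials each pod and compares the response to the pod name ("Got expected result from replica 1 ... my-hostname-basic-...-jdfsd"). A sketch with the name from the log; the image and labels are assumptions:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-049d3c29-d9e9-49f7-8828-92c9e89e6a71
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic     # illustrative label
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # A "serve hostname" style image that answers HTTP requests with the
        # pod's hostname; the exact image/tag used by the suite is an assumption.
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```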
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:36:24.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  3 14:36:35.166: INFO: Successfully updated pod "labelsupdatec2699a49-c102-4d22-875a-fc86e2017a8c"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:36:39.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-930" for this suite.
Feb  3 14:37:01.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:37:01.405: INFO: namespace projected-930 deletion completed in 22.137956556s

• [SLOW TEST:36.983 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
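The projected downwardAPI test above creates a pod whose labels are exposed as a file, mutates a label, and waits for the kubelet to refresh the file ("Successfully updated pod"). A sketch of the pod shape; all names, labels, and the image are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example
  labels:
    key1: value1                   # label the test later mutates
spec:
  containers:
  - name: client-container
    image: docker.io/library/nginx:1.14-alpine   # placeholder image
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```

After the pod's labels change, the kubelet rewrites /etc/podinfo/labels on its sync loop, which is why the check passes only a few seconds after the update rather than immediately.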
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:37:01.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  3 14:37:10.231: INFO: Successfully updated pod "labelsupdate6a659f49-ad70-47b9-82f8-35059b85a9cd"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:37:12.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3115" for this suite.
Feb  3 14:37:34.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:37:34.851: INFO: namespace downward-api-3115 deletion completed in 22.183228238s

• [SLOW TEST:33.445 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
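The Downward API volume test above is the same label-update check as the projected variant, but using the plain `downwardAPI` volume type instead of a `projected` source. A sketch (names, labels, and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-downwardapi-example
  labels:
    key1: value1                   # label the test later mutates
spec:
  containers:
  - name: client-container
    image: docker.io/library/nginx:1.14-alpine   # placeholder image
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:                   # non-projected form; otherwise identical behavior
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```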
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:37:34.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  3 14:37:34.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3466'
Feb  3 14:37:37.310: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  3 14:37:37.310: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Feb  3 14:37:37.402: INFO: scanned /root for discovery docs: 
Feb  3 14:37:37.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3466'
Feb  3 14:37:58.580: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  3 14:37:58.581: INFO: stdout: "Created e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537\nScaling up e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  3 14:37:58.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:37:59.006: INFO: stderr: ""
Feb  3 14:37:59.007: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:04.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:04.223: INFO: stderr: ""
Feb  3 14:38:04.223: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:09.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:09.476: INFO: stderr: ""
Feb  3 14:38:09.477: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:14.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:14.641: INFO: stderr: ""
Feb  3 14:38:14.641: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:19.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:19.872: INFO: stderr: ""
Feb  3 14:38:19.872: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:24.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:25.037: INFO: stderr: ""
Feb  3 14:38:25.037: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:30.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:30.213: INFO: stderr: ""
Feb  3 14:38:30.214: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:35.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:35.424: INFO: stderr: ""
Feb  3 14:38:35.424: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:40.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:40.675: INFO: stderr: ""
Feb  3 14:38:40.675: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:45.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:45.876: INFO: stderr: ""
Feb  3 14:38:45.877: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:50.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:50.996: INFO: stderr: ""
Feb  3 14:38:50.996: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:38:55.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:38:56.188: INFO: stderr: ""
Feb  3 14:38:56.188: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:39:01.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:39:01.293: INFO: stderr: ""
Feb  3 14:39:01.293: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:39:06.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:39:06.531: INFO: stderr: ""
Feb  3 14:39:06.532: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:39:11.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:39:11.686: INFO: stderr: ""
Feb  3 14:39:11.686: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:39:16.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:39:16.856: INFO: stderr: ""
Feb  3 14:39:16.856: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf e2e-test-nginx-rc-zfw75 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  3 14:39:21.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:39:22.026: INFO: stderr: ""
Feb  3 14:39:22.027: INFO: stdout: "e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf "
Feb  3 14:39:22.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3466'
Feb  3 14:39:22.203: INFO: stderr: ""
Feb  3 14:39:22.204: INFO: stdout: "true"
Feb  3 14:39:22.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3466'
Feb  3 14:39:22.321: INFO: stderr: ""
Feb  3 14:39:22.322: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  3 14:39:22.322: INFO: e2e-test-nginx-rc-400f63f2fc96d1d4e23d535642e26537-d6vbf is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb  3 14:39:22.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3466'
Feb  3 14:39:22.445: INFO: stderr: ""
Feb  3 14:39:22.445: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:39:22.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3466" for this suite.
Feb  3 14:39:44.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:39:44.703: INFO: namespace kubectl-3466 deletion completed in 22.25238747s

• [SLOW TEST:129.851 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
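The long run of `Replicas for run=e2e-test-nginx-rc: expected=1 actual=2` lines above is the framework polling every 5 seconds until the pre-update pod is reaped and the pod count matches the RC's replica count. A minimal sketch of that wait loop in plain shell, where `get_pod_names` is a hypothetical stand-in for the `kubectl get pods -l run=... -o template` call seen in the log (it fakes two polls returning two pods, then one):

```shell
#!/bin/sh
# Hypothetical stand-in for:
#   kubectl get pods -l run=e2e-test-nginx-rc --namespace=kubectl-3466 \
#     -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
# Sets $names directly (not via $(...)) so POLL survives in this shell.
POLL=0
get_pod_names() {
  POLL=$((POLL + 1))
  if [ "$POLL" -le 2 ]; then
    names="old-rc-pod new-rc-pod"   # old pod not yet reaped
  else
    names="new-rc-pod"
  fi
}

expected=1
while :; do
  get_pod_names
  actual=$(echo "$names" | wc -w)
  if [ "$actual" -eq "$expected" ]; then
    echo "replicas settled: $names"
    break
  fi
  echo "Replicas: expected=$expected actual=$actual"
  # the real framework sleeps 5s between polls; omitted here
done
```

This is a sketch under assumed fake data, not the framework's Go implementation; the real loop lives in test/e2e/kubectl and shells out to kubectl exactly as the log shows.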
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:39:44.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 14:39:44.804: INFO: Create a RollingUpdate DaemonSet
Feb  3 14:39:44.814: INFO: Check that daemon pods launch on every node of the cluster
Feb  3 14:39:44.827: INFO: Number of nodes with available pods: 0
Feb  3 14:39:44.827: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:46.063: INFO: Number of nodes with available pods: 0
Feb  3 14:39:46.064: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:47.326: INFO: Number of nodes with available pods: 0
Feb  3 14:39:47.326: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:47.845: INFO: Number of nodes with available pods: 0
Feb  3 14:39:47.845: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:48.838: INFO: Number of nodes with available pods: 0
Feb  3 14:39:48.838: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:50.364: INFO: Number of nodes with available pods: 0
Feb  3 14:39:50.364: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:51.162: INFO: Number of nodes with available pods: 0
Feb  3 14:39:51.162: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:51.846: INFO: Number of nodes with available pods: 0
Feb  3 14:39:51.846: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:52.987: INFO: Number of nodes with available pods: 0
Feb  3 14:39:52.987: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:53.851: INFO: Number of nodes with available pods: 0
Feb  3 14:39:53.851: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:54.871: INFO: Number of nodes with available pods: 1
Feb  3 14:39:54.871: INFO: Node iruya-node is running more than one daemon pod
Feb  3 14:39:55.844: INFO: Number of nodes with available pods: 2
Feb  3 14:39:55.844: INFO: Number of running nodes: 2, number of available pods: 2
Feb  3 14:39:55.844: INFO: Update the DaemonSet to trigger a rollout
Feb  3 14:39:55.860: INFO: Updating DaemonSet daemon-set
Feb  3 14:40:06.968: INFO: Roll back the DaemonSet before rollout is complete
Feb  3 14:40:07.006: INFO: Updating DaemonSet daemon-set
Feb  3 14:40:07.006: INFO: Make sure DaemonSet rollback is complete
Feb  3 14:40:07.041: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:07.041: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:08.106: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:08.106: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:09.097: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:09.097: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:10.106: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:10.106: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:11.098: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:11.098: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:12.097: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:12.097: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:13.099: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:13.099: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:14.099: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:14.100: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:15.102: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:15.103: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:16.103: INFO: Wrong image for pod: daemon-set-p8cs5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  3 14:40:16.103: INFO: Pod daemon-set-p8cs5 is not available
Feb  3 14:40:17.100: INFO: Pod daemon-set-qdhzf is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3799, will wait for the garbage collector to delete the pods
Feb  3 14:40:17.195: INFO: Deleting DaemonSet.extensions daemon-set took: 19.380278ms
Feb  3 14:40:17.495: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.503825ms
Feb  3 14:40:24.400: INFO: Number of nodes with available pods: 0
Feb  3 14:40:24.401: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 14:40:24.403: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3799/daemonsets","resourceVersion":"22953848"},"items":null}

Feb  3 14:40:24.406: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3799/pods","resourceVersion":"22953848"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:40:24.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3799" for this suite.
Feb  3 14:40:30.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:40:30.596: INFO: namespace daemonsets-3799 deletion completed in 6.172694332s

• [SLOW TEST:45.893 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
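The `Wrong image for pod` lines above come from the framework comparing each daemon pod's image against the rollback target (`docker.io/library/nginx:1.14-alpine`) until no pod is left on the bad `foo:non-existent` image. A rough shell equivalent of that per-pod check, using hard-coded sample data rather than live cluster state:

```shell
#!/bin/sh
expected="docker.io/library/nginx:1.14-alpine"

# Hypothetical sample: pod name and its current image, one pair per line,
# mirroring the state mid-rollback in the log above.
pods="daemon-set-p8cs5 foo:non-existent
daemon-set-qdhzf docker.io/library/nginx:1.14-alpine"

# Report each mismatch, mimicking the framework's log line.
echo "$pods" | while read -r name image; do
  if [ "$image" != "$expected" ]; then
    echo "Wrong image for pod: $name. Expected: $expected, got: $image."
  fi
done

# Count mismatches with awk instead of inside the piped while loop,
# because the pipe runs the loop in a subshell and would lose the count.
mismatches=$(echo "$pods" | awk -v want="$expected" '$2 != want' | wc -l)
echo "pods still on the old image: $mismatches"
```

The rollback itself would ordinarily be driven by `kubectl rollout undo daemonset/daemon-set`; the conformance test instead patches the DaemonSet spec back through the API, as the `Updating DaemonSet daemon-set` lines indicate.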
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:40:30.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  3 14:40:39.032: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-1961d6f4-432b-484b-a405-82a18467670d,GenerateName:,Namespace:events-6586,SelfLink:/api/v1/namespaces/events-6586/pods/send-events-1961d6f4-432b-484b-a405-82a18467670d,UID:e7f62aeb-17b5-4a32-b634-3d882ac6ed5b,ResourceVersion:22953900,Generation:0,CreationTimestamp:2020-02-03 14:40:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 840332215,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s5645 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s5645,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-s5645 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00304c0f0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc00304c110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:40:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:40:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:40:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:40:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-03 14:40:31 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-03 14:40:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://3ebac7c6b3f189e0d2647e1d5135db762edd17239deae154d6e584fa08b6cc72}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  3 14:40:41.049: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  3 14:40:43.068: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:40:43.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6586" for this suite.
Feb  3 14:41:29.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:41:29.335: INFO: namespace events-6586 deletion completed in 46.162500134s

• [SLOW TEST:58.739 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:41:29.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4f90eb50-1bc3-4c0b-b28a-cc3937f1839a
STEP: Creating a pod to test consume configMaps
Feb  3 14:41:29.421: INFO: Waiting up to 5m0s for pod "pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf" in namespace "configmap-7733" to be "success or failure"
Feb  3 14:41:29.526: INFO: Pod "pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf": Phase="Pending", Reason="", readiness=false. Elapsed: 104.760445ms
Feb  3 14:41:31.537: INFO: Pod "pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115434349s
Feb  3 14:41:33.545: INFO: Pod "pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123813754s
Feb  3 14:41:35.559: INFO: Pod "pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137933074s
Feb  3 14:41:37.567: INFO: Pod "pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145317933s
Feb  3 14:41:39.575: INFO: Pod "pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.153517036s
STEP: Saw pod success
Feb  3 14:41:39.575: INFO: Pod "pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf" satisfied condition "success or failure"
Feb  3 14:41:39.580: INFO: Trying to get logs from node iruya-node pod pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf container configmap-volume-test: 
STEP: delete the pod
Feb  3 14:41:39.651: INFO: Waiting for pod pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf to disappear
Feb  3 14:41:39.656: INFO: Pod pod-configmaps-67b3166c-cfa2-4a4d-b9e3-e1466d9035bf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:41:39.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7733" for this suite.
Feb  3 14:41:45.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:41:45.889: INFO: namespace configmap-7733 deletion completed in 6.185515125s

• [SLOW TEST:16.554 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
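The `Waiting up to 5m0s ... to be "success or failure"` sequence above is the framework polling the pod's `.status.phase` until it reaches a terminal phase. A compact sketch of that wait, with `pod_phase` as a hypothetical stand-in for the API read (it fakes two `Pending` polls, then `Succeeded`):

```shell
#!/bin/sh
# Hypothetical stand-in for reading .status.phase from the pod object;
# sets $phase directly so $TRY is updated in this shell, not a subshell.
TRY=0
pod_phase() {
  TRY=$((TRY + 1))
  if [ "$TRY" -lt 3 ]; then
    phase="Pending"
  else
    phase="Succeeded"
  fi
}

while :; do
  pod_phase
  echo "Pod phase=$phase (attempt $TRY)"
  case "$phase" in
    Succeeded|Failed) break ;;   # terminal phases end the wait
  esac
  # the real framework sleeps ~2s between polls; omitted here
done
echo 'pod satisfied condition "success or failure"'
```

Either terminal phase ends the wait; the test then asserts `Succeeded` and fetches the container log to verify the mounted ConfigMap contents, as the `Trying to get logs` line shows.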
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:41:45.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-zkxzc in namespace proxy-6758
I0203 14:41:46.049964       8 runners.go:180] Created replication controller with name: proxy-service-zkxzc, namespace: proxy-6758, replica count: 1
I0203 14:41:47.101087       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:41:48.101766       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:41:49.102292       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:41:50.102824       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:41:51.103820       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:41:52.104366       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:41:53.104821       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:41:54.105450       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0203 14:41:55.105988       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0203 14:41:56.106542       8 runners.go:180] proxy-service-zkxzc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 14:41:56.119: INFO: setup took 10.125815895s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  3 14:41:56.155: INFO: (0) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 35.76675ms)
Feb  3 14:41:56.155: INFO: (0) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 35.351275ms)
Feb  3 14:41:56.155: INFO: (0) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 36.010977ms)
Feb  3 14:41:56.155: INFO: (0) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 36.0498ms)
Feb  3 14:41:56.155: INFO: (0) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 35.674422ms)
Feb  3 14:41:56.155: INFO: (0) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 35.85365ms)
Feb  3 14:41:56.170: INFO: (0) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 50.825028ms)
Feb  3 14:41:56.171: INFO: (0) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 51.079336ms)
Feb  3 14:41:56.176: INFO: (0) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 55.977754ms)
Feb  3 14:41:56.176: INFO: (0) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 57.023532ms)
Feb  3 14:41:56.180: INFO: (0) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 60.357142ms)
Feb  3 14:41:56.180: INFO: (0) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 61.054305ms)
Feb  3 14:41:56.181: INFO: (0) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 61.195459ms)
Feb  3 14:41:56.181: INFO: (0) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test (200; 26.198277ms)
Feb  3 14:41:56.213: INFO: (1) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 25.851357ms)
Feb  3 14:41:56.214: INFO: (1) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 27.753033ms)
Feb  3 14:41:56.221: INFO: (2) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 6.980767ms)
Feb  3 14:41:56.221: INFO: (2) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 7.038866ms)
Feb  3 14:41:56.226: INFO: (2) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 11.56556ms)
Feb  3 14:41:56.226: INFO: (2) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 11.437887ms)
Feb  3 14:41:56.226: INFO: (2) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 11.677571ms)
Feb  3 14:41:56.226: INFO: (2) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 11.684169ms)
Feb  3 14:41:56.226: INFO: (2) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test (200; 12.136914ms)
Feb  3 14:41:56.227: INFO: (2) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 12.872289ms)
Feb  3 14:41:56.228: INFO: (2) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 13.246568ms)
Feb  3 14:41:56.228: INFO: (2) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 13.559867ms)
Feb  3 14:41:56.231: INFO: (2) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 16.537014ms)
Feb  3 14:41:56.231: INFO: (2) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 16.656631ms)
Feb  3 14:41:56.232: INFO: (2) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 17.657091ms)
Feb  3 14:41:56.233: INFO: (2) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 18.922793ms)
Feb  3 14:41:56.245: INFO: (3) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 11.493062ms)
Feb  3 14:41:56.245: INFO: (3) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 12.064376ms)
Feb  3 14:41:56.246: INFO: (3) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 12.671813ms)
Feb  3 14:41:56.247: INFO: (3) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test<... (200; 13.744855ms)
Feb  3 14:41:56.248: INFO: (3) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 14.700366ms)
Feb  3 14:41:56.248: INFO: (3) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 14.842714ms)
Feb  3 14:41:56.252: INFO: (3) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 18.160228ms)
Feb  3 14:41:56.254: INFO: (3) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 21.023398ms)
Feb  3 14:41:56.255: INFO: (3) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 21.3039ms)
Feb  3 14:41:56.255: INFO: (3) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 21.490972ms)
Feb  3 14:41:56.255: INFO: (3) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 21.893467ms)
Feb  3 14:41:56.259: INFO: (3) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 25.798248ms)
Feb  3 14:41:56.259: INFO: (3) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 25.778133ms)
Feb  3 14:41:56.260: INFO: (3) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 26.100463ms)
Feb  3 14:41:56.260: INFO: (3) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 26.257236ms)
Feb  3 14:41:56.273: INFO: (4) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 13.168756ms)
Feb  3 14:41:56.273: INFO: (4) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 13.223161ms)
Feb  3 14:41:56.273: INFO: (4) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 13.426977ms)
Feb  3 14:41:56.274: INFO: (4) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 13.78017ms)
Feb  3 14:41:56.274: INFO: (4) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 13.798487ms)
Feb  3 14:41:56.275: INFO: (4) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 14.824959ms)
Feb  3 14:41:56.275: INFO: (4) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 15.022512ms)
Feb  3 14:41:56.276: INFO: (4) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 16.474398ms)
Feb  3 14:41:56.277: INFO: (4) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 17.041445ms)
Feb  3 14:41:56.277: INFO: (4) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test (200; 17.498953ms)
Feb  3 14:41:56.278: INFO: (4) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 17.80153ms)
Feb  3 14:41:56.278: INFO: (4) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 18.145759ms)
Feb  3 14:41:56.278: INFO: (4) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 18.47913ms)
Feb  3 14:41:56.278: INFO: (4) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 18.54905ms)
Feb  3 14:41:56.279: INFO: (4) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 19.092471ms)
Feb  3 14:41:56.285: INFO: (5) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 5.715135ms)
Feb  3 14:41:56.285: INFO: (5) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 6.200871ms)
Feb  3 14:41:56.285: INFO: (5) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 6.336612ms)
Feb  3 14:41:56.288: INFO: (5) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 8.852257ms)
Feb  3 14:41:56.288: INFO: (5) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 9.303985ms)
Feb  3 14:41:56.289: INFO: (5) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 9.500946ms)
Feb  3 14:41:56.289: INFO: (5) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 10.321471ms)
Feb  3 14:41:56.290: INFO: (5) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 11.117152ms)
Feb  3 14:41:56.293: INFO: (5) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 13.445668ms)
Feb  3 14:41:56.293: INFO: (5) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 13.654227ms)
Feb  3 14:41:56.293: INFO: (5) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 14.356158ms)
Feb  3 14:41:56.294: INFO: (5) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 14.913224ms)
Feb  3 14:41:56.294: INFO: (5) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 14.975354ms)
Feb  3 14:41:56.295: INFO: (5) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 15.484553ms)
Feb  3 14:41:56.295: INFO: (5) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: ... (200; 12.707051ms)
Feb  3 14:41:56.309: INFO: (6) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 12.911771ms)
Feb  3 14:41:56.309: INFO: (6) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 13.029836ms)
Feb  3 14:41:56.309: INFO: (6) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 12.961929ms)
Feb  3 14:41:56.310: INFO: (6) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 13.416166ms)
Feb  3 14:41:56.310: INFO: (6) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 14.212861ms)
Feb  3 14:41:56.310: INFO: (6) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 14.107682ms)
Feb  3 14:41:56.310: INFO: (6) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 14.209522ms)
Feb  3 14:41:56.311: INFO: (6) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test (200; 14.880369ms)
Feb  3 14:41:56.311: INFO: (6) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 15.016467ms)
Feb  3 14:41:56.322: INFO: (7) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 10.720765ms)
Feb  3 14:41:56.323: INFO: (7) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 11.362748ms)
Feb  3 14:41:56.324: INFO: (7) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 12.220364ms)
Feb  3 14:41:56.324: INFO: (7) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 12.739991ms)
Feb  3 14:41:56.324: INFO: (7) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 13.083287ms)
Feb  3 14:41:56.324: INFO: (7) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 13.060948ms)
Feb  3 14:41:56.325: INFO: (7) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 13.432997ms)
Feb  3 14:41:56.325: INFO: (7) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 13.912704ms)
Feb  3 14:41:56.325: INFO: (7) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: ... (200; 13.855572ms)
Feb  3 14:41:56.325: INFO: (7) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 13.890712ms)
Feb  3 14:41:56.325: INFO: (7) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 13.82522ms)
Feb  3 14:41:56.325: INFO: (7) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 13.848317ms)
Feb  3 14:41:56.327: INFO: (7) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 15.814398ms)
Feb  3 14:41:56.328: INFO: (7) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 16.056477ms)
Feb  3 14:41:56.328: INFO: (7) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 16.174404ms)
Feb  3 14:41:56.340: INFO: (8) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 11.662509ms)
Feb  3 14:41:56.340: INFO: (8) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test<... (200; 15.350799ms)
Feb  3 14:41:56.344: INFO: (8) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 15.947579ms)
Feb  3 14:41:56.344: INFO: (8) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 15.881847ms)
Feb  3 14:41:56.344: INFO: (8) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 16.10421ms)
Feb  3 14:41:56.344: INFO: (8) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 16.148212ms)
Feb  3 14:41:56.345: INFO: (8) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 16.561726ms)
Feb  3 14:41:56.345: INFO: (8) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 17.200494ms)
Feb  3 14:41:56.345: INFO: (8) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 17.312791ms)
Feb  3 14:41:56.346: INFO: (8) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 17.697473ms)
Feb  3 14:41:56.346: INFO: (8) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 17.732075ms)
Feb  3 14:41:56.347: INFO: (8) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 18.579201ms)
Feb  3 14:41:56.347: INFO: (8) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 18.686951ms)
Feb  3 14:41:56.357: INFO: (9) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 9.543064ms)
Feb  3 14:41:56.357: INFO: (9) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 9.421958ms)
Feb  3 14:41:56.358: INFO: (9) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 10.932708ms)
Feb  3 14:41:56.359: INFO: (9) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 11.901404ms)
Feb  3 14:41:56.359: INFO: (9) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 11.926347ms)
Feb  3 14:41:56.359: INFO: (9) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 12.027453ms)
Feb  3 14:41:56.359: INFO: (9) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 12.356911ms)
Feb  3 14:41:56.359: INFO: (9) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 12.145313ms)
Feb  3 14:41:56.359: INFO: (9) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 12.129261ms)
Feb  3 14:41:56.359: INFO: (9) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 12.344108ms)
Feb  3 14:41:56.360: INFO: (9) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 12.792789ms)
Feb  3 14:41:56.360: INFO: (9) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 13.378256ms)
Feb  3 14:41:56.361: INFO: (9) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 13.55605ms)
Feb  3 14:41:56.361: INFO: (9) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 13.921091ms)
Feb  3 14:41:56.361: INFO: (9) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test<... (200; 9.015537ms)
Feb  3 14:41:56.375: INFO: (10) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 12.308649ms)
Feb  3 14:41:56.375: INFO: (10) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 12.307236ms)
Feb  3 14:41:56.375: INFO: (10) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 12.85076ms)
Feb  3 14:41:56.375: INFO: (10) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 13.025995ms)
Feb  3 14:41:56.376: INFO: (10) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 13.148933ms)
Feb  3 14:41:56.376: INFO: (10) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 13.837847ms)
Feb  3 14:41:56.376: INFO: (10) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 13.796938ms)
Feb  3 14:41:56.376: INFO: (10) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 13.730202ms)
Feb  3 14:41:56.378: INFO: (10) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 15.537666ms)
Feb  3 14:41:56.378: INFO: (10) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 15.743074ms)
Feb  3 14:41:56.378: INFO: (10) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 16.174427ms)
Feb  3 14:41:56.379: INFO: (10) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 17.151678ms)
Feb  3 14:41:56.379: INFO: (10) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: ... (200; 5.665524ms)
Feb  3 14:41:56.386: INFO: (11) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 5.607406ms)
Feb  3 14:41:56.394: INFO: (11) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 13.34293ms)
Feb  3 14:41:56.395: INFO: (11) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test (200; 19.030679ms)
Feb  3 14:41:56.401: INFO: (11) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 19.637882ms)
Feb  3 14:41:56.401: INFO: (11) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 19.740017ms)
Feb  3 14:41:56.401: INFO: (11) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 20.535587ms)
Feb  3 14:41:56.412: INFO: (12) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 10.016735ms)
Feb  3 14:41:56.412: INFO: (12) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test<... (200; 11.645953ms)
Feb  3 14:41:56.413: INFO: (12) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 11.695759ms)
Feb  3 14:41:56.413: INFO: (12) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 11.741561ms)
Feb  3 14:41:56.414: INFO: (12) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 12.495734ms)
Feb  3 14:41:56.415: INFO: (12) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 12.834937ms)
Feb  3 14:41:56.415: INFO: (12) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 12.914711ms)
Feb  3 14:41:56.415: INFO: (12) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 13.246899ms)
Feb  3 14:41:56.415: INFO: (12) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 13.420324ms)
Feb  3 14:41:56.415: INFO: (12) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 13.590597ms)
Feb  3 14:41:56.420: INFO: (13) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 4.312427ms)
Feb  3 14:41:56.423: INFO: (13) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 8.126325ms)
Feb  3 14:41:56.424: INFO: (13) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 8.716889ms)
Feb  3 14:41:56.424: INFO: (13) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 8.746804ms)
Feb  3 14:41:56.424: INFO: (13) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 8.851516ms)
Feb  3 14:41:56.424: INFO: (13) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 8.858943ms)
Feb  3 14:41:56.424: INFO: (13) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 8.945642ms)
Feb  3 14:41:56.425: INFO: (13) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 9.282101ms)
Feb  3 14:41:56.425: INFO: (13) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 9.392965ms)
Feb  3 14:41:56.425: INFO: (13) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 9.875141ms)
Feb  3 14:41:56.426: INFO: (13) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 10.32192ms)
Feb  3 14:41:56.426: INFO: (13) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 10.501877ms)
Feb  3 14:41:56.427: INFO: (13) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 12.185267ms)
Feb  3 14:41:56.428: INFO: (13) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test (200; 16.870494ms)
Feb  3 14:41:56.450: INFO: (14) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 16.96298ms)
Feb  3 14:41:56.450: INFO: (14) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 16.934397ms)
Feb  3 14:41:56.450: INFO: (14) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 17.224854ms)
Feb  3 14:41:56.450: INFO: (14) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 17.007046ms)
Feb  3 14:41:56.450: INFO: (14) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test (200; 32.304229ms)
Feb  3 14:41:56.488: INFO: (15) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 32.412852ms)
Feb  3 14:41:56.488: INFO: (15) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 32.452993ms)
Feb  3 14:41:56.489: INFO: (15) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 33.639828ms)
Feb  3 14:41:56.489: INFO: (15) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 33.925617ms)
Feb  3 14:41:56.497: INFO: (16) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 7.2593ms)
Feb  3 14:41:56.499: INFO: (16) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 9.813457ms)
Feb  3 14:41:56.499: INFO: (16) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 9.207867ms)
Feb  3 14:41:56.499: INFO: (16) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 9.414279ms)
Feb  3 14:41:56.499: INFO: (16) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 9.406467ms)
Feb  3 14:41:56.499: INFO: (16) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 9.351234ms)
Feb  3 14:41:56.500: INFO: (16) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 10.146388ms)
Feb  3 14:41:56.500: INFO: (16) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 10.725155ms)
Feb  3 14:41:56.502: INFO: (16) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 12.082336ms)
Feb  3 14:41:56.503: INFO: (16) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test<... (200; 12.901662ms)
Feb  3 14:41:56.503: INFO: (16) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 13.897394ms)
Feb  3 14:41:56.504: INFO: (16) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 13.246963ms)
Feb  3 14:41:56.504: INFO: (16) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 14.629815ms)
Feb  3 14:41:56.504: INFO: (16) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 13.968854ms)
Feb  3 14:41:56.538: INFO: (17) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 33.343289ms)
Feb  3 14:41:56.538: INFO: (17) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 33.469726ms)
Feb  3 14:41:56.538: INFO: (17) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 33.120229ms)
Feb  3 14:41:56.538: INFO: (17) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 32.969869ms)
Feb  3 14:41:56.538: INFO: (17) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 33.454787ms)
Feb  3 14:41:56.538: INFO: (17) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname2/proxy/: tls qux (200; 33.375921ms)
Feb  3 14:41:56.539: INFO: (17) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 34.030477ms)
Feb  3 14:41:56.539: INFO: (17) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 33.807494ms)
Feb  3 14:41:56.539: INFO: (17) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:160/proxy/: foo (200; 33.922456ms)
Feb  3 14:41:56.539: INFO: (17) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: ... (200; 8.85883ms)
Feb  3 14:41:56.551: INFO: (18) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname1/proxy/: foo (200; 10.044021ms)
Feb  3 14:41:56.551: INFO: (18) /api/v1/namespaces/proxy-6758/services/proxy-service-zkxzc:portname2/proxy/: bar (200; 10.846278ms)
Feb  3 14:41:56.552: INFO: (18) /api/v1/namespaces/proxy-6758/services/https:proxy-service-zkxzc:tlsportname1/proxy/: tls baz (200; 11.426661ms)
Feb  3 14:41:56.552: INFO: (18) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname1/proxy/: foo (200; 11.178992ms)
Feb  3 14:41:56.552: INFO: (18) /api/v1/namespaces/proxy-6758/services/http:proxy-service-zkxzc:portname2/proxy/: bar (200; 11.097898ms)
Feb  3 14:41:56.553: INFO: (18) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/: test<... (200; 13.481775ms)
Feb  3 14:41:56.554: INFO: (18) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:460/proxy/: tls baz (200; 13.6445ms)
Feb  3 14:41:56.555: INFO: (18) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 13.81796ms)
Feb  3 14:41:56.562: INFO: (19) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 7.57508ms)
Feb  3 14:41:56.562: INFO: (19) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 7.629981ms)
Feb  3 14:41:56.564: INFO: (19) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:462/proxy/: tls qux (200; 8.912584ms)
Feb  3 14:41:56.564: INFO: (19) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c/proxy/: test (200; 8.649641ms)
Feb  3 14:41:56.564: INFO: (19) /api/v1/namespaces/proxy-6758/pods/proxy-service-zkxzc-bxh7c:1080/proxy/: test<... (200; 8.722616ms)
Feb  3 14:41:56.564: INFO: (19) /api/v1/namespaces/proxy-6758/pods/http:proxy-service-zkxzc-bxh7c:1080/proxy/: ... (200; 9.391983ms)
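Each proxy line above has a fixed shape: iteration index, proxied URL, response body, HTTP status, and round-trip latency. As a hypothetical aid for digesting such output (this helper is not part of the e2e suite), a minimal Python sketch that extracts those fields:

```python
import re

# Pattern matching the e2e proxy log lines seen above, e.g.
# "Feb  3 14:41:56.562: INFO: (19) /api/v1/.../proxy/: bar (200; 7.629981ms)"
LINE_RE = re.compile(
    r"INFO: \((?P<iter>\d+)\) (?P<url>\S+): .* "
    r"\((?P<status>\d+); (?P<latency>[\d.]+)ms\)"
)

def parse(line):
    """Return (iteration, url, status, latency_ms), or None if the line
    is not a proxy-request record (e.g. a STEP: or teardown line)."""
    m = LINE_RE.search(line)
    if not m:
        return None
    return (int(m.group("iter")), m.group("url"),
            int(m.group("status")), float(m.group("latency")))

sample = ("Feb  3 14:41:56.562: INFO: (19) /api/v1/namespaces/proxy-6758/"
          "pods/proxy-service-zkxzc-bxh7c:162/proxy/: bar (200; 7.629981ms)")
print(parse(sample))
```

Feeding every line of the run through `parse` and grouping by the iteration index would, for example, let one spot the latency spikes visible around iterations 15 and 17 above.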
Feb  3 14:41:56.568: INFO: (19) /api/v1/namespaces/proxy-6758/pods/https:proxy-service-zkxzc-bxh7c:443/proxy/:
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  3 14:42:12.948: INFO: PodSpec: initContainers in spec.initContainers
Feb  3 14:43:16.812: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e613f402-f358-4a50-8ce4-a358bae3e4d2", GenerateName:"", Namespace:"init-container-5641", SelfLink:"/api/v1/namespaces/init-container-5641/pods/pod-init-e613f402-f358-4a50-8ce4-a358bae3e4d2", UID:"7efa0c91-32ea-49d1-b2b6-a345c69eac90", ResourceVersion:"22954228", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716337732, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"948749448"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8zr2k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003205d40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8zr2k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8zr2k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8zr2k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00150f4a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc001fd0a20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00150f530)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00150f570)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00150f578), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00150f57c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716337733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716337733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716337733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716337732, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001a2e1e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aa8e70)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aa8f50)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://1bb128ba27fa0e6ed75a2e78a6ca44351619588e8cc091f920893b3c7c0ca216"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a2e220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001a2e200), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:43:16.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5641" for this suite.
Feb  3 14:43:38.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:43:38.975: INFO: namespace init-container-5641 deletion completed in 22.14994248s

• [SLOW TEST:86.183 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
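The failed test above exercises the init-container contract: on a `RestartPolicy: Always` pod, app containers must never start while an init container keeps failing (here `init1` runs `/bin/false` and is on restart count 3, so `run1` stays `Waiting`). A minimal Python sketch of that gating rule — the function name and status strings are illustrative, not the kubelet's actual code:

```python
def may_start_app_containers(init_statuses):
    """App containers may start only after every init container has
    terminated successfully, in declaration order (Kubernetes init semantics)."""
    for status in init_statuses:
        if status != "succeeded":
            return False
    return True

# Mirrors the dumped pod status: init1 keeps failing (/bin/false),
# init2 is still waiting, so app container run1 must not start.
print(may_start_app_containers(["failed", "waiting"]))   # False
print(may_start_app_containers(["succeeded", "succeeded"]))  # True
```

The kubelet keeps restarting the failing init container (the pod's restart policy applies) but never advances past it, which is exactly what the `ContainersNotInitialized` condition in the dump records.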
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:43:38.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 14:43:39.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:43:47.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-865" for this suite.
Feb  3 14:44:39.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:44:39.392: INFO: namespace pods-865 deletion completed in 52.176280082s

• [SLOW TEST:60.417 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
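The websocket-logs test above streams the pod `log` subresource over a websocket instead of a plain HTTP GET. The core-API path it hits can be sketched as below — the pod name is a made-up placeholder, only the path shape is the documented API:

```python
def pod_log_path(namespace, pod, follow=False):
    """Build the core v1 API path for a pod's 'log' subresource.
    The e2e test opens this endpoint over a websocket connection."""
    path = f"/api/v1/namespaces/{namespace}/pods/{pod}/log"
    if follow:
        path += "?follow=true"
    return path

# e.g. for the namespace created by this test run:
print(pod_log_path("pods-865", "example-pod", follow=True))
```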
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:44:39.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  3 14:44:39.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5811'
Feb  3 14:44:39.946: INFO: stderr: ""
Feb  3 14:44:39.946: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  3 14:44:40.971: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 14:44:40.971: INFO: Found 0 / 1
Feb  3 14:44:42.022: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 14:44:42.022: INFO: Found 0 / 1
Feb  3 14:44:42.963: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 14:44:42.963: INFO: Found 0 / 1
Feb  3 14:44:43.961: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 14:44:43.961: INFO: Found 0 / 1
Feb  3 14:44:44.958: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 14:44:44.958: INFO: Found 0 / 1
Feb  3 14:44:45.956: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 14:44:45.956: INFO: Found 0 / 1
Feb  3 14:44:46.966: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 14:44:46.966: INFO: Found 1 / 1
Feb  3 14:44:46.966: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  3 14:44:46.973: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 14:44:46.973: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  3 14:44:46.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-tsm54 --namespace=kubectl-5811 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  3 14:44:47.080: INFO: stderr: ""
Feb  3 14:44:47.080: INFO: stdout: "pod/redis-master-tsm54 patched\n"
STEP: checking annotations
Feb  3 14:44:47.092: INFO: Selector matched 1 pods for map[app:redis]
Feb  3 14:44:47.093: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:44:47.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5811" for this suite.
Feb  3 14:45:09.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:45:09.233: INFO: namespace kubectl-5811 deletion completed in 22.137408712s

• [SLOW TEST:29.841 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
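The `kubectl patch` invocation above sends `{"metadata":{"annotations":{"x":"y"}}}`, which merges into the existing object rather than replacing it. A simplified Python model of that merge behavior (no list merge keys, no null-deletes — the real strategic merge patch handles both):

```python
import copy

def merge_patch(obj, patch):
    """Recursively merge a merge-patch-style dict into an object.
    Simplified sketch: nested dicts merge, everything else overwrites."""
    out = copy.deepcopy(obj)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_patch(out[key], value)
        else:
            out[key] = value
    return out

pod = {"metadata": {"name": "redis-master-tsm54", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"]["annotations"])  # {'x': 'y'}
print(patched["metadata"]["name"])         # name survives the merge
```

The "checking annotations" step then just reads the pod back and verifies the `x: y` annotation is present on every pod matched by the `app=redis` selector.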
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:45:09.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-b798daf3-76c2-4ed2-9c05-c73401ffed50 in namespace container-probe-769
Feb  3 14:45:19.454: INFO: Started pod test-webserver-b798daf3-76c2-4ed2-9c05-c73401ffed50 in namespace container-probe-769
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 14:45:19.461: INFO: Initial restart count of pod test-webserver-b798daf3-76c2-4ed2-9c05-c73401ffed50 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:49:21.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-769" for this suite.
Feb  3 14:49:27.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:49:27.330: INFO: namespace container-probe-769 deletion completed in 6.152065162s

• [SLOW TEST:258.095 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
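The 258-second runtime above is mostly deliberate waiting: the test samples the container's `restartCount` for roughly four minutes and fails if it ever rises above the initial value of 0, proving the `/healthz` probe stays green. The check reduces to this sketch (illustrative names, fake samples in place of API polling):

```python
def assert_no_restarts(restart_counts, initial):
    """Fail if any observed restartCount exceeds the initial sample.
    Models the e2e check: a healthy liveness probe must not cause restarts."""
    for count in restart_counts:
        if count > initial:
            raise AssertionError(f"container restarted: {count} > {initial}")

# Samples collected over the observation window — all zero, test passes.
assert_no_restarts([0, 0, 0, 0], initial=0)
```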
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:49:27.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb  3 14:49:27.424: INFO: Waiting up to 5m0s for pod "client-containers-16ce93c3-5463-45b8-ad98-424b08643031" in namespace "containers-7304" to be "success or failure"
Feb  3 14:49:27.506: INFO: Pod "client-containers-16ce93c3-5463-45b8-ad98-424b08643031": Phase="Pending", Reason="", readiness=false. Elapsed: 81.211028ms
Feb  3 14:49:29.515: INFO: Pod "client-containers-16ce93c3-5463-45b8-ad98-424b08643031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090150024s
Feb  3 14:49:31.542: INFO: Pod "client-containers-16ce93c3-5463-45b8-ad98-424b08643031": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117874684s
Feb  3 14:49:33.554: INFO: Pod "client-containers-16ce93c3-5463-45b8-ad98-424b08643031": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129496044s
Feb  3 14:49:35.565: INFO: Pod "client-containers-16ce93c3-5463-45b8-ad98-424b08643031": Phase="Pending", Reason="", readiness=false. Elapsed: 8.140708126s
Feb  3 14:49:37.575: INFO: Pod "client-containers-16ce93c3-5463-45b8-ad98-424b08643031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150469649s
STEP: Saw pod success
Feb  3 14:49:37.575: INFO: Pod "client-containers-16ce93c3-5463-45b8-ad98-424b08643031" satisfied condition "success or failure"
Feb  3 14:49:37.579: INFO: Trying to get logs from node iruya-node pod client-containers-16ce93c3-5463-45b8-ad98-424b08643031 container test-container: 
STEP: delete the pod
Feb  3 14:49:37.661: INFO: Waiting for pod client-containers-16ce93c3-5463-45b8-ad98-424b08643031 to disappear
Feb  3 14:49:37.671: INFO: Pod client-containers-16ce93c3-5463-45b8-ad98-424b08643031 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:49:37.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7304" for this suite.
Feb  3 14:49:43.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:49:43.881: INFO: namespace containers-7304 deletion completed in 6.200597318s

• [SLOW TEST:16.552 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
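The entrypoint-override test above relies on the documented interaction between a container's `command`/`args` fields and the image's `ENTRYPOINT`/`CMD`: setting `command` replaces the entrypoint entirely. The four cases can be sketched as (entrypoint/cmd values here are placeholders):

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    """Resolve the process a container runs, per the documented
    Kubernetes command/args vs. image ENTRYPOINT/CMD rules."""
    if command is None and args is None:
        return entrypoint + cmd          # image defaults
    if command is not None and args is None:
        return command                   # command replaces both
    if command is None:
        return entrypoint + args         # args replace CMD only
    return command + args                # both overridden

# The test sets spec.containers[0].command, discarding the image ENTRYPOINT:
print(effective_invocation(["/image-ep"], ["default-arg"],
                           command=["/bin/echo", "overridden"]))
```

The test then reads the pod's log and asserts the output came from the overridden command, not the image default.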
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:49:43.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  3 14:49:44.027: INFO: Creating deployment "nginx-deployment"
Feb  3 14:49:44.047: INFO: Waiting for observed generation 1
Feb  3 14:49:46.570: INFO: Waiting for all required pods to come up
Feb  3 14:49:47.929: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  3 14:50:15.963: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  3 14:50:15.973: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  3 14:50:15.983: INFO: Updating deployment nginx-deployment
Feb  3 14:50:15.983: INFO: Waiting for observed generation 2
Feb  3 14:50:18.641: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  3 14:50:19.306: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  3 14:50:19.476: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  3 14:50:19.495: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  3 14:50:19.495: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  3 14:50:19.499: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  3 14:50:19.510: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  3 14:50:19.510: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  3 14:50:19.528: INFO: Updating deployment nginx-deployment
Feb  3 14:50:19.528: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  3 14:50:20.779: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  3 14:50:25.251: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
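The two verified replica counts (old ReplicaSet 8 → 20, new ReplicaSet 5 → 13) follow from distributing the extra replicas in proportion to each ReplicaSet's current size. With the deployment scaled from 10 to 30 and `maxSurge: 3`, the allowed total is 33; the 20 additional replicas split 20·8/13 ≈ 12 to the old set and 20·5/13 ≈ 8 to the new one. A simplified sketch of that arithmetic (the real controller tracks rounding leftovers per-ReplicaSet more carefully):

```python
def proportional_scale(current_sizes, target, max_surge):
    """Distribute (target + max_surge - sum(current)) replicas across
    ReplicaSets in proportion to their current sizes. Simplified rounding:
    round each share, assign any drift to the last (newest) ReplicaSet."""
    total = sum(current_sizes)
    to_add = target + max_surge - total
    shares = [round(to_add * size / total) for size in current_sizes]
    shares[-1] += to_add - sum(shares)  # absorb rounding drift
    return [size + share for size, share in zip(current_sizes, shares)]

# nginx-deployment scaled 10 -> 30 with maxSurge=3 while mid-rollout:
# old RS at 8 replicas, new (broken-image) RS at 5, allowed total 33.
print(proportional_scale([8, 5], target=30, max_surge=3))  # [20, 13]
```

Those are exactly the `.spec.replicas` values the log verifies on the two ReplicaSets.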
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  3 14:50:27.537: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8360,SelfLink:/apis/apps/v1/namespaces/deployment-8360/deployments/nginx-deployment,UID:b9cfdb10-459d-4998-93c1-a9717116922b,ResourceVersion:22955142,Generation:3,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-03 14:50:20 +0000 UTC 2020-02-03 14:50:20 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-03 14:50:23 +0000 UTC 2020-02-03 14:49:44 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  3 14:50:30.207: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8360,SelfLink:/apis/apps/v1/namespaces/deployment-8360/replicasets/nginx-deployment-55fb7cb77f,UID:9ad6b3ba-32df-498b-8708-4bb552c53ad8,ResourceVersion:22955133,Generation:3,CreationTimestamp:2020-02-03 14:50:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b9cfdb10-459d-4998-93c1-a9717116922b 0xc0023239c7 0xc0023239c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  3 14:50:30.207: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  3 14:50:30.208: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8360,SelfLink:/apis/apps/v1/namespaces/deployment-8360/replicasets/nginx-deployment-7b8c6f4498,UID:950dc5e1-bcfa-4dab-8535-2310d4cda371,ResourceVersion:22955136,Generation:3,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b9cfdb10-459d-4998-93c1-a9717116922b 0xc002323a97 0xc002323a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb  3 14:50:31.928: INFO: Pod "nginx-deployment-55fb7cb77f-2d5cn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2d5cn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-2d5cn,UID:1587a3bc-d529-40d8-a70f-ca324eb956c0,ResourceVersion:22955124,Generation:0,CreationTimestamp:2020-02-03 14:50:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0022d7cc7 0xc0022d7cc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022d7d40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022d7d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.929: INFO: Pod "nginx-deployment-55fb7cb77f-2j7hd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2j7hd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-2j7hd,UID:d2c0fad0-a0c5-4bb3-8ac8-bf03b19c6934,ResourceVersion:22955128,Generation:0,CreationTimestamp:2020-02-03 14:50:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0022d7e07 0xc0022d7e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0022d7e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022d7ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.930: INFO: Pod "nginx-deployment-55fb7cb77f-8fg94" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8fg94,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-8fg94,UID:228b5d35-cc20-4321-89dd-377edc3d2977,ResourceVersion:22955114,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0022d7f47 0xc0022d7f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025c2050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025c2070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.931: INFO: Pod "nginx-deployment-55fb7cb77f-fbhlg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fbhlg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-fbhlg,UID:f3b1df62-34b4-4495-abb1-879536f213e6,ResourceVersion:22955121,Generation:0,CreationTimestamp:2020-02-03 14:50:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0025c2247 0xc0025c2248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025c2340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025c2560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.932: INFO: Pod "nginx-deployment-55fb7cb77f-fkcj8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fkcj8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-fkcj8,UID:0436c0dd-788c-40a2-a489-31d36091d49f,ResourceVersion:22955122,Generation:0,CreationTimestamp:2020-02-03 14:50:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0025c2617 0xc0025c2618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025c2720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025c27c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.933: INFO: Pod "nginx-deployment-55fb7cb77f-hdk9s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hdk9s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-hdk9s,UID:a031d7bd-951a-4515-a3e7-631e557163c1,ResourceVersion:22955052,Generation:0,CreationTimestamp:2020-02-03 14:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0025c2b17 0xc0025c2b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025c2c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025c2c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-03 14:50:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.933: INFO: Pod "nginx-deployment-55fb7cb77f-jp6rg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jp6rg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-jp6rg,UID:f02e3574-3d88-4320-9210-3aa1725be190,ResourceVersion:22955123,Generation:0,CreationTimestamp:2020-02-03 14:50:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0025c2fa7 0xc0025c2fa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025c3200} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025c3220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.934: INFO: Pod "nginx-deployment-55fb7cb77f-kt9jn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kt9jn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-kt9jn,UID:fbf8e9c6-cd5a-4411-9cad-379fdd86a00e,ResourceVersion:22955062,Generation:0,CreationTimestamp:2020-02-03 14:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0025c33a7 0xc0025c33a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025c35a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025c35c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-03 14:50:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.934: INFO: Pod "nginx-deployment-55fb7cb77f-mw9cf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mw9cf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-mw9cf,UID:fc8db14b-c8b6-407d-9945-83334503b785,ResourceVersion:22955063,Generation:0,CreationTimestamp:2020-02-03 14:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0025c3867 0xc0025c3868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025c3a30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025c3a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-03 14:50:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.935: INFO: Pod "nginx-deployment-55fb7cb77f-ngzzv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ngzzv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-ngzzv,UID:51b13f7a-8be7-4af9-a95a-30ff0bc0d73b,ResourceVersion:22955143,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc0025c3d47 0xc0025c3d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025c3de0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025c3e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-03 14:50:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.935: INFO: Pod "nginx-deployment-55fb7cb77f-pbbst" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pbbst,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-pbbst,UID:5e917783-082e-495b-93d6-e04b8a3cada1,ResourceVersion:22955050,Generation:0,CreationTimestamp:2020-02-03 14:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc002180187 0xc002180188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021802e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002180320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-03 14:50:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.936: INFO: Pod "nginx-deployment-55fb7cb77f-qzmvx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qzmvx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-qzmvx,UID:a651653e-e3dd-48c3-892a-30fcbe73db66,ResourceVersion:22955118,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc002180547 0xc002180548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002180710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002180790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.937: INFO: Pod "nginx-deployment-55fb7cb77f-wcf9q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wcf9q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-55fb7cb77f-wcf9q,UID:96c2317a-e69d-4abe-8e74-08ae5937047d,ResourceVersion:22955066,Generation:0,CreationTimestamp:2020-02-03 14:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9ad6b3ba-32df-498b-8708-4bb552c53ad8 0xc002180917 0xc002180918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002180a90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002180ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-03 14:50:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.937: INFO: Pod "nginx-deployment-7b8c6f4498-2nh8j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2nh8j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-2nh8j,UID:b2fe9f0c-7ba6-4989-b698-d1d937951480,ResourceVersion:22955130,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc002180d47 0xc002180d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002180f90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002180fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-03 14:50:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.938: INFO: Pod "nginx-deployment-7b8c6f4498-8nmdc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8nmdc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-8nmdc,UID:1fcf291b-69bc-4283-a0a4-1b82e0c9e6de,ResourceVersion:22954974,Generation:0,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc002181407 0xc002181408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002181640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021816f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-03 14:49:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 14:50:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4e0b7a05dd6f1966851753dce7c9fbb9c1bc125563284bae99a45e145db00e6e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.939: INFO: Pod "nginx-deployment-7b8c6f4498-c7zlq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c7zlq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-c7zlq,UID:cc06f293-b639-4731-a0a4-d4aa6e93fffb,ResourceVersion:22955145,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0021817c7 0xc0021817c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002181840} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002181860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-03 14:50:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.939: INFO: Pod "nginx-deployment-7b8c6f4498-grtvp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-grtvp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-grtvp,UID:64711bad-13da-47c5-85a4-137cb2e08085,ResourceVersion:22955116,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc002181927 0xc002181928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021819a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021819c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.940: INFO: Pod "nginx-deployment-7b8c6f4498-h8cf5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h8cf5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-h8cf5,UID:fd14e90f-98b1-4ce1-b5a3-99351029cd4f,ResourceVersion:22954970,Generation:0,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc002181a57 0xc002181a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002181ac0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002181ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-03 14:49:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 14:50:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://471044b1bf7579ab5f14efc570f1491969479125186e5d0f167cc96849241701}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.940: INFO: Pod "nginx-deployment-7b8c6f4498-hcxsj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hcxsj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-hcxsj,UID:57cc20fd-3b38-4f8c-bea7-c2ed1cbf349f,ResourceVersion:22955117,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc002181bc7 0xc002181bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002181c40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002181c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.941: INFO: Pod "nginx-deployment-7b8c6f4498-hsggr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hsggr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-hsggr,UID:6420cbfc-1a5f-4bd9-a4d6-f33b309b88e5,ResourceVersion:22955149,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc002181ce7 0xc002181ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002181d50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002181d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-03 14:50:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.943: INFO: Pod "nginx-deployment-7b8c6f4498-j495t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j495t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-j495t,UID:60f469e5-417e-43fb-9194-eec6595c9dd6,ResourceVersion:22955155,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc002181e37 0xc002181e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002181eb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002181ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-03 14:50:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.944: INFO: Pod "nginx-deployment-7b8c6f4498-k8ndd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k8ndd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-k8ndd,UID:4201a89a-3651-4bf1-b9a9-2d6184c7b070,ResourceVersion:22955006,Generation:0,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc002181f97 0xc002181f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a4020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a4040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-03 14:49:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 14:50:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f3a7b03cb03cb5d7e27968a80b4a175a768c76e7c07510260eb7306a1418ce21}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.945: INFO: Pod "nginx-deployment-7b8c6f4498-k9cbs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k9cbs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-k9cbs,UID:246aa2b9-1fe7-4f97-916b-e7b0ceb07361,ResourceVersion:22954996,Generation:0,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a4117 0xc0011a4118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a41b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a41d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-03 14:49:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 14:50:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://063991374f5568273bc7da522a36bbbb06a02850518fe2112f9b970bbc9ec232}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.945: INFO: Pod "nginx-deployment-7b8c6f4498-l696m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l696m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-l696m,UID:2940a3b7-3617-4078-8edb-fe0f0cd2d527,ResourceVersion:22955115,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a42b7 0xc0011a42b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a4350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a4380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.945: INFO: Pod "nginx-deployment-7b8c6f4498-ld8mh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ld8mh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-ld8mh,UID:7d52177a-fed3-4c1e-86d0-8f3c2db3c4f3,ResourceVersion:22955134,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a4407 0xc0011a4408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a44a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a44f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-03 14:50:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.946: INFO: Pod "nginx-deployment-7b8c6f4498-njnwk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-njnwk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-njnwk,UID:be9248eb-2e50-4d57-8477-fb2e4cdf57a7,ResourceVersion:22955012,Generation:0,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a4607 0xc0011a4608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a4810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a4830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-03 14:49:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 14:50:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fbec57c6edd4996e6b95f575ac03aeb44525e0ae6d6fc9f193ed0ba8d39456c1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.947: INFO: Pod "nginx-deployment-7b8c6f4498-rpmbh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rpmbh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-rpmbh,UID:25428013-1b49-482a-b10e-1cdf70d5a062,ResourceVersion:22954967,Generation:0,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a4a17 0xc0011a4a18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a4aa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a4ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-03 14:49:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 14:50:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1a4ca5c2a655a11849900eb2aa7fb44fc5bbab2a0e9e8f59987a68cadca477d9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.948: INFO: Pod "nginx-deployment-7b8c6f4498-scmn5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-scmn5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-scmn5,UID:1affffb8-395a-4c75-bf5c-239dea3e2582,ResourceVersion:22955119,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a4b97 0xc0011a4b98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a4c70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a4c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.948: INFO: Pod "nginx-deployment-7b8c6f4498-t5tr4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t5tr4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-t5tr4,UID:ab26adaa-4505-46df-ad30-467c90a3c743,ResourceVersion:22954999,Generation:0,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a4d17 0xc0011a4d18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a4d90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a4db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-02-03 14:49:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 14:50:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://82dcf035f0743f8e7335a0f2ee87ff76e67b57ec4e0b6599c1850b0498d3b250}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.949: INFO: Pod "nginx-deployment-7b8c6f4498-tzv98" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tzv98,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-tzv98,UID:d6fbb11b-8aa6-431b-98fc-776d5cbba4de,ResourceVersion:22955101,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a5127 0xc0011a5128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a5580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a55a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.949: INFO: Pod "nginx-deployment-7b8c6f4498-vjsvc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vjsvc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-vjsvc,UID:b53d4d9d-09ec-4a44-9435-c1652cea988b,ResourceVersion:22954964,Generation:0,CreationTimestamp:2020-02-03 14:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a58a7 0xc0011a58a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a5970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a59d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-03 14:49:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-03 14:50:07 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://93387582e9be1db4ada85e3f32d804c7516cae5b9c4be93b648782c58c5de1fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.950: INFO: Pod "nginx-deployment-7b8c6f4498-x8b7b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x8b7b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-x8b7b,UID:0e21ef2c-7439-43cd-99ae-9df7b928ee8f,ResourceVersion:22955107,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0011a5c47 0xc0011a5c48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0011a5f40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0011a5f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  3 14:50:31.951: INFO: Pod "nginx-deployment-7b8c6f4498-xmjjs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xmjjs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8360,SelfLink:/api/v1/namespaces/deployment-8360/pods/nginx-deployment-7b8c6f4498-xmjjs,UID:dce8e187-cb74-4fa2-9784-3eb88596155a,ResourceVersion:22955102,Generation:0,CreationTimestamp:2020-02-03 14:50:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 950dc5e1-bcfa-4dab-8535-2310d4cda371 0xc0029bc047 0xc0029bc048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hblfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hblfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hblfw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029bc0c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029bc0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-03 14:50:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:50:31.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8360" for this suite.
Feb  3 14:51:23.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:51:23.192: INFO: namespace deployment-8360 deletion completed in 50.245948692s

• [SLOW TEST:99.309 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
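The "is available" / "is not available" verdicts logged in the Deployment test above follow from the pods' reported status: pods in phase `Running` with a `Ready` condition of `True` are counted available, while `Pending` pods (e.g. those still in `ContainerCreating`) are not. A minimal sketch of that decision rule, written against the JSON-like pod structure shown in the dumps (this is a simplified illustration, not the actual e2e framework helper):

```python
def pod_is_available(pod: dict) -> bool:
    """Decide availability the way the log above reports it (simplified):
    the pod must be Running and its Ready condition must be True."""
    status = pod.get("status", {})
    if status.get("phase") != "Running":
        return False
    for cond in status.get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False


# A Running pod with Ready=True (like nginx-deployment-7b8c6f4498-njnwk above):
running_pod = {
    "status": {
        "phase": "Running",
        "conditions": [{"type": "Ready", "status": "True"}],
    }
}

# A Pending pod that is only scheduled (like nginx-deployment-7b8c6f4498-l696m):
pending_pod = {
    "status": {
        "phase": "Pending",
        "conditions": [{"type": "PodScheduled", "status": "True"}],
    }
}
```

Note that the real framework additionally honors `minReadySeconds` when counting available replicas; the sketch ignores that for brevity.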
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:51:23.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  3 14:51:23.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9040'
Feb  3 14:51:25.341: INFO: stderr: ""
Feb  3 14:51:25.341: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 14:51:25.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9040'
Feb  3 14:51:25.615: INFO: stderr: ""
Feb  3 14:51:25.615: INFO: stdout: "update-demo-nautilus-r8xvd update-demo-nautilus-rg55g "
Feb  3 14:51:25.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r8xvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9040'
Feb  3 14:51:25.839: INFO: stderr: ""
Feb  3 14:51:25.839: INFO: stdout: ""
Feb  3 14:51:25.839: INFO: update-demo-nautilus-r8xvd is created but not running
Feb  3 14:51:30.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9040'
Feb  3 14:51:31.266: INFO: stderr: ""
Feb  3 14:51:31.266: INFO: stdout: "update-demo-nautilus-r8xvd update-demo-nautilus-rg55g "
Feb  3 14:51:31.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r8xvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9040'
Feb  3 14:51:31.533: INFO: stderr: ""
Feb  3 14:51:31.534: INFO: stdout: ""
Feb  3 14:51:31.534: INFO: update-demo-nautilus-r8xvd is created but not running
Feb  3 14:51:36.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9040'
Feb  3 14:51:36.690: INFO: stderr: ""
Feb  3 14:51:36.690: INFO: stdout: "update-demo-nautilus-r8xvd update-demo-nautilus-rg55g "
Feb  3 14:51:36.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r8xvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9040'
Feb  3 14:51:36.779: INFO: stderr: ""
Feb  3 14:51:36.779: INFO: stdout: "true"
Feb  3 14:51:36.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r8xvd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9040'
Feb  3 14:51:36.909: INFO: stderr: ""
Feb  3 14:51:36.909: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:51:36.909: INFO: validating pod update-demo-nautilus-r8xvd
Feb  3 14:51:36.948: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:51:36.949: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  3 14:51:36.949: INFO: update-demo-nautilus-r8xvd is verified up and running
Feb  3 14:51:36.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rg55g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9040'
Feb  3 14:51:37.109: INFO: stderr: ""
Feb  3 14:51:37.109: INFO: stdout: "true"
Feb  3 14:51:37.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rg55g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9040'
Feb  3 14:51:37.226: INFO: stderr: ""
Feb  3 14:51:37.226: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:51:37.226: INFO: validating pod update-demo-nautilus-rg55g
Feb  3 14:51:37.239: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:51:37.239: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  3 14:51:37.239: INFO: update-demo-nautilus-rg55g is verified up and running
STEP: using delete to clean up resources
Feb  3 14:51:37.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9040'
Feb  3 14:51:37.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 14:51:37.380: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  3 14:51:37.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9040'
Feb  3 14:51:37.506: INFO: stderr: "No resources found.\n"
Feb  3 14:51:37.507: INFO: stdout: ""
Feb  3 14:51:37.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9040 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  3 14:51:37.600: INFO: stderr: ""
Feb  3 14:51:37.600: INFO: stdout: "update-demo-nautilus-r8xvd\n"
Feb  3 14:51:38.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9040'
Feb  3 14:51:38.207: INFO: stderr: "No resources found.\n"
Feb  3 14:51:38.208: INFO: stdout: ""
Feb  3 14:51:38.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9040 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  3 14:51:38.344: INFO: stderr: ""
Feb  3 14:51:38.344: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:51:38.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9040" for this suite.
Feb  3 14:52:00.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:52:00.501: INFO: namespace kubectl-9040 deletion completed in 22.150194228s

• [SLOW TEST:37.308 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
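The Update Demo spec above polls each pod with a kubectl go-template that emits `true` only when the container named `update-demo` reports a `running` state (note the two polls that return an empty stdout before the pod comes up). Below is a minimal, self-contained sketch of what that template evaluates to, run against a stand-in pod object instead of a live cluster. The `exists` helper is an approximation of the function kubectl registers for `-o template`; the pod map is a hypothetical fixture, not output captured from this run.

```go
package main

import (
	"os"
	"text/template"
)

// exists approximates the helper kubectl registers for `-o template`:
// it reports whether the chain of nested fields is present on the object.
func exists(v interface{}, fields ...string) bool {
	for _, f := range fields {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[f]; !ok {
			return false
		}
	}
	return true
}

func main() {
	// The exact template string the log shows kubectl being invoked with.
	const tmpl = `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`

	// Hypothetical stand-in for the pod object kubectl would fetch:
	// one container status, named update-demo, in the running state.
	pod := map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{
				map[string]interface{}{
					"name":  "update-demo",
					"state": map[string]interface{}{"running": map[string]interface{}{}},
				},
			},
		},
	}

	t := template.Must(template.New("running").
		Funcs(template.FuncMap{"exists": exists}).
		Parse(tmpl))
	t.Execute(os.Stdout, pod) // emits "true"; with a Pending pod it emits nothing
}
```

An empty stdout (as at 14:51:25 and 14:51:31 above) therefore just means no container status satisfied the template yet, which is why the test re-polls every ~5 seconds before declaring the pod "verified up and running".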
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:52:00.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8856
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  3 14:52:00.637: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  3 14:52:40.776: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8856 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 14:52:40.776: INFO: >>> kubeConfig: /root/.kube/config
I0203 14:52:40.878676       8 log.go:172] (0xc00021d340) (0xc0026a8dc0) Create stream
I0203 14:52:40.878813       8 log.go:172] (0xc00021d340) (0xc0026a8dc0) Stream added, broadcasting: 1
I0203 14:52:40.887398       8 log.go:172] (0xc00021d340) Reply frame received for 1
I0203 14:52:40.887454       8 log.go:172] (0xc00021d340) (0xc000935220) Create stream
I0203 14:52:40.887470       8 log.go:172] (0xc00021d340) (0xc000935220) Stream added, broadcasting: 3
I0203 14:52:40.891476       8 log.go:172] (0xc00021d340) Reply frame received for 3
I0203 14:52:40.891647       8 log.go:172] (0xc00021d340) (0xc001b460a0) Create stream
I0203 14:52:40.891707       8 log.go:172] (0xc00021d340) (0xc001b460a0) Stream added, broadcasting: 5
I0203 14:52:40.894180       8 log.go:172] (0xc00021d340) Reply frame received for 5
I0203 14:52:41.093378       8 log.go:172] (0xc00021d340) Data frame received for 3
I0203 14:52:41.093448       8 log.go:172] (0xc000935220) (3) Data frame handling
I0203 14:52:41.093473       8 log.go:172] (0xc000935220) (3) Data frame sent
I0203 14:52:41.287012       8 log.go:172] (0xc00021d340) Data frame received for 1
I0203 14:52:41.287169       8 log.go:172] (0xc0026a8dc0) (1) Data frame handling
I0203 14:52:41.287206       8 log.go:172] (0xc0026a8dc0) (1) Data frame sent
I0203 14:52:41.288229       8 log.go:172] (0xc00021d340) (0xc0026a8dc0) Stream removed, broadcasting: 1
I0203 14:52:41.288502       8 log.go:172] (0xc00021d340) (0xc001b460a0) Stream removed, broadcasting: 5
I0203 14:52:41.288635       8 log.go:172] (0xc00021d340) (0xc000935220) Stream removed, broadcasting: 3
I0203 14:52:41.288699       8 log.go:172] (0xc00021d340) Go away received
I0203 14:52:41.288761       8 log.go:172] (0xc00021d340) (0xc0026a8dc0) Stream removed, broadcasting: 1
I0203 14:52:41.288803       8 log.go:172] (0xc00021d340) (0xc000935220) Stream removed, broadcasting: 3
I0203 14:52:41.288821       8 log.go:172] (0xc00021d340) (0xc001b460a0) Stream removed, broadcasting: 5
Feb  3 14:52:41.289: INFO: Waiting for endpoints: map[]
Feb  3 14:52:41.301: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8856 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  3 14:52:41.301: INFO: >>> kubeConfig: /root/.kube/config
I0203 14:52:41.375946       8 log.go:172] (0xc00094a8f0) (0xc000935720) Create stream
I0203 14:52:41.376078       8 log.go:172] (0xc00094a8f0) (0xc000935720) Stream added, broadcasting: 1
I0203 14:52:41.382397       8 log.go:172] (0xc00094a8f0) Reply frame received for 1
I0203 14:52:41.382433       8 log.go:172] (0xc00094a8f0) (0xc001b463c0) Create stream
I0203 14:52:41.382444       8 log.go:172] (0xc00094a8f0) (0xc001b463c0) Stream added, broadcasting: 3
I0203 14:52:41.385101       8 log.go:172] (0xc00094a8f0) Reply frame received for 3
I0203 14:52:41.385234       8 log.go:172] (0xc00094a8f0) (0xc00151a000) Create stream
I0203 14:52:41.385248       8 log.go:172] (0xc00094a8f0) (0xc00151a000) Stream added, broadcasting: 5
I0203 14:52:41.386803       8 log.go:172] (0xc00094a8f0) Reply frame received for 5
I0203 14:52:41.480057       8 log.go:172] (0xc00094a8f0) Data frame received for 3
I0203 14:52:41.480110       8 log.go:172] (0xc001b463c0) (3) Data frame handling
I0203 14:52:41.480132       8 log.go:172] (0xc001b463c0) (3) Data frame sent
I0203 14:52:41.611849       8 log.go:172] (0xc00094a8f0) Data frame received for 1
I0203 14:52:41.612021       8 log.go:172] (0xc000935720) (1) Data frame handling
I0203 14:52:41.612092       8 log.go:172] (0xc000935720) (1) Data frame sent
I0203 14:52:41.612166       8 log.go:172] (0xc00094a8f0) (0xc001b463c0) Stream removed, broadcasting: 3
I0203 14:52:41.612307       8 log.go:172] (0xc00094a8f0) (0xc00151a000) Stream removed, broadcasting: 5
I0203 14:52:41.612419       8 log.go:172] (0xc00094a8f0) (0xc000935720) Stream removed, broadcasting: 1
I0203 14:52:41.612491       8 log.go:172] (0xc00094a8f0) Go away received
I0203 14:52:41.613193       8 log.go:172] (0xc00094a8f0) (0xc000935720) Stream removed, broadcasting: 1
I0203 14:52:41.613335       8 log.go:172] (0xc00094a8f0) (0xc001b463c0) Stream removed, broadcasting: 3
I0203 14:52:41.613364       8 log.go:172] (0xc00094a8f0) (0xc00151a000) Stream removed, broadcasting: 5
Feb  3 14:52:41.613: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:52:41.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8856" for this suite.
Feb  3 14:53:05.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:53:05.810: INFO: namespace pod-network-test-8856 deletion completed in 24.187512743s

• [SLOW TEST:65.309 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
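The intra-pod UDP spec above works by exec'ing a curl inside the host test pod against the netexec agent's `/dial` endpoint, which relays a UDP probe to the target pod and reports which hostnames answered; `Waiting for endpoints: map[]` means every expected responder has been accounted for. The sketch below only shows how such a dial URL is assembled; `dialURL` is a hypothetical helper for illustration (the real e2e framework builds the URL inline), with the IPs and ports taken from the log.

```go
package main

import (
	"fmt"
	"net/url"
)

// dialURL builds a netexec /dial request: ask the proxy pod at
// proxyIP:proxyPort to probe targetIP:targetPort over the given
// protocol and return the hostname(s) that answered.
func dialURL(proxyIP string, proxyPort int, protocol, targetIP string, targetPort, tries int) string {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", protocol)
	q.Set("host", targetIP)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", fmt.Sprint(tries))
	return fmt.Sprintf("http://%s:%d/dial?%s", proxyIP, proxyPort, q.Encode())
}

func main() {
	// Values from the log: host-test pod at 10.44.0.2:8080 probing the
	// netserver pod at 10.32.0.4:8081 over UDP, one attempt.
	fmt.Println(dialURL("10.44.0.2", 8080, "udp", "10.32.0.4", 8081, 1))
}
```

Note that `url.Values.Encode` sorts parameters alphabetically, so the generated query string orders its keys differently than the hand-written URL in the log; the server treats them identically.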
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:53:05.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1614
I0203 14:53:05.946844       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1614, replica count: 1
I0203 14:53:06.998352       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:53:07.999109       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:53:08.999541       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:53:10.000211       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:53:11.001721       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:53:12.002146       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:53:13.002486       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:53:14.003113       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:53:15.003712       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0203 14:53:16.004110       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  3 14:53:16.165: INFO: Created: latency-svc-sxtbr
Feb  3 14:53:16.183: INFO: Got endpoints: latency-svc-sxtbr [78.603489ms]
Feb  3 14:53:16.269: INFO: Created: latency-svc-g4b9s
Feb  3 14:53:16.280: INFO: Got endpoints: latency-svc-g4b9s [95.487198ms]
Feb  3 14:53:16.318: INFO: Created: latency-svc-9j2pk
Feb  3 14:53:16.433: INFO: Got endpoints: latency-svc-9j2pk [249.426585ms]
Feb  3 14:53:16.452: INFO: Created: latency-svc-v8lh9
Feb  3 14:53:16.478: INFO: Got endpoints: latency-svc-v8lh9 [294.171886ms]
Feb  3 14:53:16.527: INFO: Created: latency-svc-sz8fh
Feb  3 14:53:16.580: INFO: Got endpoints: latency-svc-sz8fh [396.2056ms]
Feb  3 14:53:16.603: INFO: Created: latency-svc-snktf
Feb  3 14:53:16.610: INFO: Got endpoints: latency-svc-snktf [425.399932ms]
Feb  3 14:53:16.653: INFO: Created: latency-svc-7vnhl
Feb  3 14:53:16.672: INFO: Got endpoints: latency-svc-7vnhl [487.337092ms]
Feb  3 14:53:16.753: INFO: Created: latency-svc-9dr2n
Feb  3 14:53:16.775: INFO: Got endpoints: latency-svc-9dr2n [589.661787ms]
Feb  3 14:53:16.854: INFO: Created: latency-svc-xfx4j
Feb  3 14:53:17.067: INFO: Got endpoints: latency-svc-xfx4j [881.789816ms]
Feb  3 14:53:17.079: INFO: Created: latency-svc-j8fws
Feb  3 14:53:17.095: INFO: Got endpoints: latency-svc-j8fws [910.301142ms]
Feb  3 14:53:17.153: INFO: Created: latency-svc-5klqz
Feb  3 14:53:17.378: INFO: Got endpoints: latency-svc-5klqz [1.193193747s]
Feb  3 14:53:17.409: INFO: Created: latency-svc-ccvmg
Feb  3 14:53:17.441: INFO: Got endpoints: latency-svc-ccvmg [1.255915241s]
Feb  3 14:53:17.462: INFO: Created: latency-svc-v69sf
Feb  3 14:53:17.474: INFO: Got endpoints: latency-svc-v69sf [1.28994501s]
Feb  3 14:53:17.564: INFO: Created: latency-svc-42g9p
Feb  3 14:53:17.598: INFO: Created: latency-svc-ct9cb
Feb  3 14:53:17.600: INFO: Got endpoints: latency-svc-42g9p [1.414901219s]
Feb  3 14:53:17.622: INFO: Got endpoints: latency-svc-ct9cb [1.436870271s]
Feb  3 14:53:17.714: INFO: Created: latency-svc-t7g52
Feb  3 14:53:17.718: INFO: Got endpoints: latency-svc-t7g52 [1.53359953s]
Feb  3 14:53:17.755: INFO: Created: latency-svc-zdc8f
Feb  3 14:53:17.761: INFO: Got endpoints: latency-svc-zdc8f [139.660979ms]
Feb  3 14:53:17.849: INFO: Created: latency-svc-h9h5v
Feb  3 14:53:17.882: INFO: Got endpoints: latency-svc-h9h5v [1.602184463s]
Feb  3 14:53:17.899: INFO: Created: latency-svc-dk84q
Feb  3 14:53:17.938: INFO: Got endpoints: latency-svc-dk84q [1.505363674s]
Feb  3 14:53:17.940: INFO: Created: latency-svc-96zfb
Feb  3 14:53:18.034: INFO: Got endpoints: latency-svc-96zfb [1.555428981s]
Feb  3 14:53:18.068: INFO: Created: latency-svc-stvmd
Feb  3 14:53:18.068: INFO: Got endpoints: latency-svc-stvmd [1.48831698s]
Feb  3 14:53:18.115: INFO: Created: latency-svc-c2wkp
Feb  3 14:53:18.122: INFO: Got endpoints: latency-svc-c2wkp [1.511762072s]
Feb  3 14:53:18.179: INFO: Created: latency-svc-wj7jg
Feb  3 14:53:18.179: INFO: Got endpoints: latency-svc-wj7jg [1.50572174s]
Feb  3 14:53:18.216: INFO: Created: latency-svc-mj2h4
Feb  3 14:53:18.217: INFO: Got endpoints: latency-svc-mj2h4 [1.442487634s]
Feb  3 14:53:18.262: INFO: Created: latency-svc-vjw9h
Feb  3 14:53:18.353: INFO: Got endpoints: latency-svc-vjw9h [1.285923475s]
Feb  3 14:53:18.382: INFO: Created: latency-svc-ndbzz
Feb  3 14:53:18.391: INFO: Got endpoints: latency-svc-ndbzz [1.295961177s]
Feb  3 14:53:18.616: INFO: Created: latency-svc-bdxcb
Feb  3 14:53:18.623: INFO: Got endpoints: latency-svc-bdxcb [1.245254541s]
Feb  3 14:53:18.670: INFO: Created: latency-svc-cz84t
Feb  3 14:53:18.674: INFO: Got endpoints: latency-svc-cz84t [1.233078732s]
Feb  3 14:53:18.755: INFO: Created: latency-svc-5nvk8
Feb  3 14:53:18.770: INFO: Got endpoints: latency-svc-5nvk8 [1.296590567s]
Feb  3 14:53:18.792: INFO: Created: latency-svc-6276m
Feb  3 14:53:18.795: INFO: Got endpoints: latency-svc-6276m [1.195057291s]
Feb  3 14:53:18.828: INFO: Created: latency-svc-lmshm
Feb  3 14:53:18.832: INFO: Got endpoints: latency-svc-lmshm [1.113873165s]
Feb  3 14:53:18.939: INFO: Created: latency-svc-lxgpg
Feb  3 14:53:18.953: INFO: Got endpoints: latency-svc-lxgpg [1.191279081s]
Feb  3 14:53:19.080: INFO: Created: latency-svc-zhjv6
Feb  3 14:53:19.091: INFO: Got endpoints: latency-svc-zhjv6 [1.208346215s]
Feb  3 14:53:19.149: INFO: Created: latency-svc-rkmhn
Feb  3 14:53:19.152: INFO: Got endpoints: latency-svc-rkmhn [1.213116094s]
Feb  3 14:53:19.231: INFO: Created: latency-svc-lb9bx
Feb  3 14:53:19.250: INFO: Got endpoints: latency-svc-lb9bx [1.216305248s]
Feb  3 14:53:19.297: INFO: Created: latency-svc-7fb8d
Feb  3 14:53:19.299: INFO: Got endpoints: latency-svc-7fb8d [1.230561962s]
Feb  3 14:53:19.431: INFO: Created: latency-svc-jbwhl
Feb  3 14:53:19.438: INFO: Got endpoints: latency-svc-jbwhl [1.315010596s]
Feb  3 14:53:19.485: INFO: Created: latency-svc-c7tjp
Feb  3 14:53:19.487: INFO: Got endpoints: latency-svc-c7tjp [1.308580922s]
Feb  3 14:53:19.571: INFO: Created: latency-svc-d9nzc
Feb  3 14:53:19.587: INFO: Got endpoints: latency-svc-d9nzc [1.369901683s]
Feb  3 14:53:19.633: INFO: Created: latency-svc-g8fqs
Feb  3 14:53:19.644: INFO: Got endpoints: latency-svc-g8fqs [1.290242655s]
Feb  3 14:53:19.726: INFO: Created: latency-svc-b9trc
Feb  3 14:53:19.732: INFO: Got endpoints: latency-svc-b9trc [1.341005158s]
Feb  3 14:53:19.762: INFO: Created: latency-svc-s2r22
Feb  3 14:53:19.766: INFO: Got endpoints: latency-svc-s2r22 [1.142396351s]
Feb  3 14:53:19.835: INFO: Created: latency-svc-qt764
Feb  3 14:53:19.878: INFO: Got endpoints: latency-svc-qt764 [1.20344047s]
Feb  3 14:53:19.896: INFO: Created: latency-svc-wlp52
Feb  3 14:53:19.931: INFO: Got endpoints: latency-svc-wlp52 [1.160543141s]
Feb  3 14:53:19.971: INFO: Created: latency-svc-wbczx
Feb  3 14:53:20.023: INFO: Got endpoints: latency-svc-wbczx [1.228330963s]
Feb  3 14:53:20.050: INFO: Created: latency-svc-tfcnn
Feb  3 14:53:20.091: INFO: Created: latency-svc-gvtxt
Feb  3 14:53:20.093: INFO: Got endpoints: latency-svc-tfcnn [1.260834754s]
Feb  3 14:53:20.099: INFO: Got endpoints: latency-svc-gvtxt [1.145626832s]
Feb  3 14:53:20.179: INFO: Created: latency-svc-2vkg8
Feb  3 14:53:20.188: INFO: Got endpoints: latency-svc-2vkg8 [1.096490354s]
Feb  3 14:53:20.226: INFO: Created: latency-svc-xlnnz
Feb  3 14:53:20.239: INFO: Got endpoints: latency-svc-xlnnz [1.087165563s]
Feb  3 14:53:20.384: INFO: Created: latency-svc-2jcwl
Feb  3 14:53:20.390: INFO: Got endpoints: latency-svc-2jcwl [1.13914173s]
Feb  3 14:53:20.426: INFO: Created: latency-svc-5tc6g
Feb  3 14:53:20.432: INFO: Got endpoints: latency-svc-5tc6g [1.132469263s]
Feb  3 14:53:20.615: INFO: Created: latency-svc-gtl85
Feb  3 14:53:20.622: INFO: Got endpoints: latency-svc-gtl85 [1.184446694s]
Feb  3 14:53:20.814: INFO: Created: latency-svc-cw9nh
Feb  3 14:53:20.835: INFO: Got endpoints: latency-svc-cw9nh [1.347494257s]
Feb  3 14:53:20.882: INFO: Created: latency-svc-qm7nt
Feb  3 14:53:20.908: INFO: Got endpoints: latency-svc-qm7nt [1.320958098s]
Feb  3 14:53:21.051: INFO: Created: latency-svc-v64t9
Feb  3 14:53:21.062: INFO: Got endpoints: latency-svc-v64t9 [1.417873098s]
Feb  3 14:53:21.158: INFO: Created: latency-svc-4bgmq
Feb  3 14:53:21.170: INFO: Got endpoints: latency-svc-4bgmq [1.43714719s]
Feb  3 14:53:21.224: INFO: Created: latency-svc-mnmxm
Feb  3 14:53:21.227: INFO: Got endpoints: latency-svc-mnmxm [1.460431674s]
Feb  3 14:53:21.395: INFO: Created: latency-svc-77psr
Feb  3 14:53:21.417: INFO: Got endpoints: latency-svc-77psr [1.539314062s]
Feb  3 14:53:21.443: INFO: Created: latency-svc-hhlht
Feb  3 14:53:21.531: INFO: Got endpoints: latency-svc-hhlht [1.59995994s]
Feb  3 14:53:21.605: INFO: Created: latency-svc-x9gbr
Feb  3 14:53:21.607: INFO: Got endpoints: latency-svc-x9gbr [1.583269449s]
Feb  3 14:53:21.691: INFO: Created: latency-svc-qb7mk
Feb  3 14:53:21.702: INFO: Got endpoints: latency-svc-qb7mk [1.608662425s]
Feb  3 14:53:21.762: INFO: Created: latency-svc-rns49
Feb  3 14:53:21.783: INFO: Got endpoints: latency-svc-rns49 [1.683625333s]
Feb  3 14:53:21.928: INFO: Created: latency-svc-f2wkg
Feb  3 14:53:21.942: INFO: Got endpoints: latency-svc-f2wkg [1.753845057s]
Feb  3 14:53:22.019: INFO: Created: latency-svc-vt2qt
Feb  3 14:53:22.027: INFO: Got endpoints: latency-svc-vt2qt [1.787683889s]
Feb  3 14:53:22.062: INFO: Created: latency-svc-kmsnt
Feb  3 14:53:22.071: INFO: Got endpoints: latency-svc-kmsnt [1.681378531s]
Feb  3 14:53:22.108: INFO: Created: latency-svc-hzpr6
Feb  3 14:53:22.192: INFO: Got endpoints: latency-svc-hzpr6 [1.760596512s]
Feb  3 14:53:22.219: INFO: Created: latency-svc-nm5fw
Feb  3 14:53:22.232: INFO: Got endpoints: latency-svc-nm5fw [1.609540118s]
Feb  3 14:53:22.289: INFO: Created: latency-svc-58vjw
Feb  3 14:53:22.351: INFO: Got endpoints: latency-svc-58vjw [1.515374825s]
Feb  3 14:53:22.389: INFO: Created: latency-svc-pljqt
Feb  3 14:53:22.422: INFO: Got endpoints: latency-svc-pljqt [1.512779305s]
Feb  3 14:53:22.518: INFO: Created: latency-svc-mztgp
Feb  3 14:53:22.535: INFO: Got endpoints: latency-svc-mztgp [1.472983558s]
Feb  3 14:53:22.667: INFO: Created: latency-svc-pb5rg
Feb  3 14:53:22.688: INFO: Got endpoints: latency-svc-pb5rg [1.518015056s]
Feb  3 14:53:22.728: INFO: Created: latency-svc-jtjlr
Feb  3 14:53:22.883: INFO: Got endpoints: latency-svc-jtjlr [1.655670871s]
Feb  3 14:53:22.884: INFO: Created: latency-svc-k48zb
Feb  3 14:53:22.924: INFO: Created: latency-svc-wvzmh
Feb  3 14:53:22.924: INFO: Got endpoints: latency-svc-k48zb [1.506259039s]
Feb  3 14:53:22.933: INFO: Got endpoints: latency-svc-wvzmh [1.400717879s]
Feb  3 14:53:23.055: INFO: Created: latency-svc-m8fqf
Feb  3 14:53:23.067: INFO: Got endpoints: latency-svc-m8fqf [1.459686932s]
Feb  3 14:53:23.110: INFO: Created: latency-svc-lhh9n
Feb  3 14:53:23.124: INFO: Got endpoints: latency-svc-lhh9n [1.421950268s]
Feb  3 14:53:23.195: INFO: Created: latency-svc-w76p6
Feb  3 14:53:23.198: INFO: Got endpoints: latency-svc-w76p6 [1.414990779s]
Feb  3 14:53:23.232: INFO: Created: latency-svc-kmbcx
Feb  3 14:53:23.244: INFO: Got endpoints: latency-svc-kmbcx [1.30210246s]
Feb  3 14:53:23.296: INFO: Created: latency-svc-xschh
Feb  3 14:53:23.370: INFO: Got endpoints: latency-svc-xschh [1.342365526s]
Feb  3 14:53:23.404: INFO: Created: latency-svc-5j7pg
Feb  3 14:53:23.424: INFO: Got endpoints: latency-svc-5j7pg [1.352359125s]
Feb  3 14:53:23.528: INFO: Created: latency-svc-qw2s8
Feb  3 14:53:23.533: INFO: Got endpoints: latency-svc-qw2s8 [1.340588256s]
Feb  3 14:53:23.596: INFO: Created: latency-svc-bftqt
Feb  3 14:53:23.727: INFO: Got endpoints: latency-svc-bftqt [1.494442241s]
Feb  3 14:53:23.729: INFO: Created: latency-svc-fkcp7
Feb  3 14:53:23.767: INFO: Got endpoints: latency-svc-fkcp7 [1.415326872s]
Feb  3 14:53:23.965: INFO: Created: latency-svc-bxm5v
Feb  3 14:53:23.994: INFO: Created: latency-svc-wdj89
Feb  3 14:53:23.998: INFO: Got endpoints: latency-svc-bxm5v [1.575790768s]
Feb  3 14:53:24.010: INFO: Got endpoints: latency-svc-wdj89 [1.473909088s]
Feb  3 14:53:24.240: INFO: Created: latency-svc-nb2sr
Feb  3 14:53:24.243: INFO: Got endpoints: latency-svc-nb2sr [1.554245507s]
Feb  3 14:53:24.295: INFO: Created: latency-svc-lbvnh
Feb  3 14:53:24.521: INFO: Created: latency-svc-tthb6
Feb  3 14:53:24.522: INFO: Got endpoints: latency-svc-lbvnh [1.638497724s]
Feb  3 14:53:24.605: INFO: Created: latency-svc-nwzt9
Feb  3 14:53:24.606: INFO: Got endpoints: latency-svc-tthb6 [1.681917332s]
Feb  3 14:53:24.780: INFO: Got endpoints: latency-svc-nwzt9 [1.846903109s]
Feb  3 14:53:24.809: INFO: Created: latency-svc-x5tfw
Feb  3 14:53:24.825: INFO: Got endpoints: latency-svc-x5tfw [1.757418321s]
Feb  3 14:53:24.949: INFO: Created: latency-svc-czbj2
Feb  3 14:53:24.949: INFO: Got endpoints: latency-svc-czbj2 [1.825241111s]
Feb  3 14:53:24.987: INFO: Created: latency-svc-28twr
Feb  3 14:53:25.005: INFO: Got endpoints: latency-svc-28twr [1.807136288s]
Feb  3 14:53:25.116: INFO: Created: latency-svc-d6sz8
Feb  3 14:53:25.150: INFO: Got endpoints: latency-svc-d6sz8 [1.905189388s]
Feb  3 14:53:25.154: INFO: Created: latency-svc-g29gm
Feb  3 14:53:25.196: INFO: Got endpoints: latency-svc-g29gm [1.826309635s]
Feb  3 14:53:25.201: INFO: Created: latency-svc-mvvd4
Feb  3 14:53:25.358: INFO: Got endpoints: latency-svc-mvvd4 [1.934396011s]
Feb  3 14:53:25.381: INFO: Created: latency-svc-rwmkm
Feb  3 14:53:25.392: INFO: Got endpoints: latency-svc-rwmkm [1.858596344s]
Feb  3 14:53:25.432: INFO: Created: latency-svc-mwnrn
Feb  3 14:53:25.440: INFO: Got endpoints: latency-svc-mwnrn [1.712278482s]
Feb  3 14:53:25.657: INFO: Created: latency-svc-6xbvp
Feb  3 14:53:25.666: INFO: Got endpoints: latency-svc-6xbvp [1.898930831s]
Feb  3 14:53:25.707: INFO: Created: latency-svc-kvmp4
Feb  3 14:53:25.720: INFO: Got endpoints: latency-svc-kvmp4 [1.721182972s]
Feb  3 14:53:25.854: INFO: Created: latency-svc-jrrhr
Feb  3 14:53:25.873: INFO: Created: latency-svc-cb9nn
Feb  3 14:53:25.874: INFO: Got endpoints: latency-svc-cb9nn [1.863741473s]
Feb  3 14:53:25.877: INFO: Got endpoints: latency-svc-jrrhr [1.633892978s]
Feb  3 14:53:25.921: INFO: Created: latency-svc-fsmp4
Feb  3 14:53:26.036: INFO: Got endpoints: latency-svc-fsmp4 [1.513927553s]
Feb  3 14:53:26.045: INFO: Created: latency-svc-xnqfp
Feb  3 14:53:26.089: INFO: Created: latency-svc-wf445
Feb  3 14:53:26.103: INFO: Got endpoints: latency-svc-xnqfp [1.496894885s]
Feb  3 14:53:26.134: INFO: Got endpoints: latency-svc-wf445 [1.354138648s]
Feb  3 14:53:26.140: INFO: Created: latency-svc-l5687
Feb  3 14:53:26.210: INFO: Got endpoints: latency-svc-l5687 [1.385381494s]
Feb  3 14:53:26.252: INFO: Created: latency-svc-bq22g
Feb  3 14:53:26.259: INFO: Got endpoints: latency-svc-bq22g [1.309727514s]
Feb  3 14:53:26.311: INFO: Created: latency-svc-rs862
Feb  3 14:53:26.311: INFO: Got endpoints: latency-svc-rs862 [1.305533815s]
Feb  3 14:53:26.409: INFO: Created: latency-svc-rm5p8
Feb  3 14:53:26.472: INFO: Created: latency-svc-c44b6
Feb  3 14:53:26.472: INFO: Got endpoints: latency-svc-rm5p8 [1.322036157s]
Feb  3 14:53:26.476: INFO: Got endpoints: latency-svc-c44b6 [1.278854074s]
Feb  3 14:53:26.618: INFO: Created: latency-svc-c8wlf
Feb  3 14:53:26.618: INFO: Got endpoints: latency-svc-c8wlf [1.259106546s]
Feb  3 14:53:26.695: INFO: Created: latency-svc-rpxgt
Feb  3 14:53:26.774: INFO: Got endpoints: latency-svc-rpxgt [1.381266211s]
Feb  3 14:53:26.796: INFO: Created: latency-svc-sk9bt
Feb  3 14:53:26.799: INFO: Got endpoints: latency-svc-sk9bt [1.358984016s]
Feb  3 14:53:27.010: INFO: Created: latency-svc-rfrqv
Feb  3 14:53:27.016: INFO: Got endpoints: latency-svc-rfrqv [1.349506345s]
Feb  3 14:53:27.104: INFO: Created: latency-svc-xbccr
Feb  3 14:53:27.155: INFO: Got endpoints: latency-svc-xbccr [1.43533419s]
Feb  3 14:53:27.180: INFO: Created: latency-svc-gbvnw
Feb  3 14:53:27.183: INFO: Got endpoints: latency-svc-gbvnw [1.309059938s]
Feb  3 14:53:27.223: INFO: Created: latency-svc-xz28v
Feb  3 14:53:27.243: INFO: Got endpoints: latency-svc-xz28v [1.365263077s]
Feb  3 14:53:27.345: INFO: Created: latency-svc-bfhnr
Feb  3 14:53:27.354: INFO: Got endpoints: latency-svc-bfhnr [1.317381513s]
Feb  3 14:53:27.531: INFO: Created: latency-svc-k8gtd
Feb  3 14:53:27.542: INFO: Got endpoints: latency-svc-k8gtd [1.438623513s]
Feb  3 14:53:27.712: INFO: Created: latency-svc-67d8x
Feb  3 14:53:27.722: INFO: Got endpoints: latency-svc-67d8x [1.587598649s]
Feb  3 14:53:27.784: INFO: Created: latency-svc-kp62j
Feb  3 14:53:27.787: INFO: Got endpoints: latency-svc-kp62j [1.576714725s]
Feb  3 14:53:27.886: INFO: Created: latency-svc-xffcp
Feb  3 14:53:27.894: INFO: Got endpoints: latency-svc-xffcp [1.634886897s]
Feb  3 14:53:27.946: INFO: Created: latency-svc-7lvdm
Feb  3 14:53:27.958: INFO: Got endpoints: latency-svc-7lvdm [1.647198986s]
Feb  3 14:53:28.061: INFO: Created: latency-svc-5vnhl
Feb  3 14:53:28.071: INFO: Got endpoints: latency-svc-5vnhl [1.598875529s]
Feb  3 14:53:28.100: INFO: Created: latency-svc-99sn6
Feb  3 14:53:28.112: INFO: Got endpoints: latency-svc-99sn6 [1.636512183s]
Feb  3 14:53:28.150: INFO: Created: latency-svc-vr9b7
Feb  3 14:53:28.202: INFO: Got endpoints: latency-svc-vr9b7 [1.583798278s]
Feb  3 14:53:28.234: INFO: Created: latency-svc-pv96v
Feb  3 14:53:28.239: INFO: Got endpoints: latency-svc-pv96v [1.464860623s]
Feb  3 14:53:28.276: INFO: Created: latency-svc-wqfcf
Feb  3 14:53:28.288: INFO: Got endpoints: latency-svc-wqfcf [1.488621036s]
Feb  3 14:53:28.373: INFO: Created: latency-svc-zmzmp
Feb  3 14:53:28.381: INFO: Got endpoints: latency-svc-zmzmp [1.364432111s]
Feb  3 14:53:28.445: INFO: Created: latency-svc-xqhrk
Feb  3 14:53:28.468: INFO: Got endpoints: latency-svc-xqhrk [1.312911625s]
Feb  3 14:53:28.567: INFO: Created: latency-svc-pwf5w
Feb  3 14:53:28.592: INFO: Got endpoints: latency-svc-pwf5w [1.409367929s]
Feb  3 14:53:28.642: INFO: Created: latency-svc-wxbpp
Feb  3 14:53:28.717: INFO: Got endpoints: latency-svc-wxbpp [1.474109872s]
Feb  3 14:53:28.747: INFO: Created: latency-svc-hwjpm
Feb  3 14:53:28.783: INFO: Got endpoints: latency-svc-hwjpm [1.428652378s]
Feb  3 14:53:28.813: INFO: Created: latency-svc-p8gqf
Feb  3 14:53:28.946: INFO: Got endpoints: latency-svc-p8gqf [1.404084281s]
Feb  3 14:53:29.022: INFO: Created: latency-svc-5x2km
Feb  3 14:53:29.033: INFO: Got endpoints: latency-svc-5x2km [1.310387985s]
Feb  3 14:53:29.154: INFO: Created: latency-svc-zktr9
Feb  3 14:53:29.169: INFO: Got endpoints: latency-svc-zktr9 [1.38152052s]
Feb  3 14:53:29.223: INFO: Created: latency-svc-b5kjq
Feb  3 14:53:29.284: INFO: Got endpoints: latency-svc-b5kjq [1.390053666s]
Feb  3 14:53:29.306: INFO: Created: latency-svc-k8jlf
Feb  3 14:53:29.312: INFO: Got endpoints: latency-svc-k8jlf [1.353218341s]
Feb  3 14:53:29.379: INFO: Created: latency-svc-qx4rx
Feb  3 14:53:29.529: INFO: Got endpoints: latency-svc-qx4rx [1.457299292s]
Feb  3 14:53:29.741: INFO: Created: latency-svc-9kpx9
Feb  3 14:53:29.773: INFO: Got endpoints: latency-svc-9kpx9 [1.660609803s]
Feb  3 14:53:29.816: INFO: Created: latency-svc-26lv7
Feb  3 14:53:29.890: INFO: Got endpoints: latency-svc-26lv7 [1.687558025s]
Feb  3 14:53:29.914: INFO: Created: latency-svc-s4lpf
Feb  3 14:53:29.922: INFO: Got endpoints: latency-svc-s4lpf [1.682126626s]
Feb  3 14:53:29.980: INFO: Created: latency-svc-7wqxw
Feb  3 14:53:30.053: INFO: Got endpoints: latency-svc-7wqxw [1.764845526s]
Feb  3 14:53:30.091: INFO: Created: latency-svc-xsl9f
Feb  3 14:53:30.148: INFO: Created: latency-svc-7qg5p
Feb  3 14:53:30.148: INFO: Got endpoints: latency-svc-xsl9f [1.767081846s]
Feb  3 14:53:30.214: INFO: Got endpoints: latency-svc-7qg5p [1.744630995s]
Feb  3 14:53:30.252: INFO: Created: latency-svc-brp4b
Feb  3 14:53:30.252: INFO: Got endpoints: latency-svc-brp4b [1.659168643s]
Feb  3 14:53:30.293: INFO: Created: latency-svc-82bbp
Feb  3 14:53:30.298: INFO: Got endpoints: latency-svc-82bbp [1.579891055s]
Feb  3 14:53:30.384: INFO: Created: latency-svc-qmxhs
Feb  3 14:53:30.385: INFO: Got endpoints: latency-svc-qmxhs [1.602193937s]
Feb  3 14:53:30.443: INFO: Created: latency-svc-cfjhq
Feb  3 14:53:30.643: INFO: Got endpoints: latency-svc-cfjhq [1.696425595s]
Feb  3 14:53:30.664: INFO: Created: latency-svc-xdfjh
Feb  3 14:53:30.842: INFO: Got endpoints: latency-svc-xdfjh [1.808590184s]
Feb  3 14:53:30.847: INFO: Created: latency-svc-9tlwj
Feb  3 14:53:30.899: INFO: Got endpoints: latency-svc-9tlwj [1.730310053s]
Feb  3 14:53:30.930: INFO: Created: latency-svc-tzhwn
Feb  3 14:53:30.931: INFO: Got endpoints: latency-svc-tzhwn [1.646041752s]
Feb  3 14:53:31.067: INFO: Created: latency-svc-t59mx
Feb  3 14:53:31.072: INFO: Got endpoints: latency-svc-t59mx [1.760040972s]
Feb  3 14:53:31.105: INFO: Created: latency-svc-mbsp7
Feb  3 14:53:31.116: INFO: Got endpoints: latency-svc-mbsp7 [1.586693092s]
Feb  3 14:53:31.200: INFO: Created: latency-svc-mszht
Feb  3 14:53:31.205: INFO: Got endpoints: latency-svc-mszht [1.431590404s]
Feb  3 14:53:31.238: INFO: Created: latency-svc-4r6g4
Feb  3 14:53:31.243: INFO: Got endpoints: latency-svc-4r6g4 [1.352826651s]
Feb  3 14:53:31.279: INFO: Created: latency-svc-l5vcd
Feb  3 14:53:31.340: INFO: Got endpoints: latency-svc-l5vcd [1.417815568s]
Feb  3 14:53:31.355: INFO: Created: latency-svc-2l4dh
Feb  3 14:53:31.364: INFO: Got endpoints: latency-svc-2l4dh [1.310799831s]
Feb  3 14:53:31.434: INFO: Created: latency-svc-2h26g
Feb  3 14:53:31.436: INFO: Got endpoints: latency-svc-2h26g [1.28763729s]
Feb  3 14:53:31.510: INFO: Created: latency-svc-4lpvc
Feb  3 14:53:31.559: INFO: Created: latency-svc-mz2mz
Feb  3 14:53:31.559: INFO: Got endpoints: latency-svc-4lpvc [1.345255833s]
Feb  3 14:53:31.565: INFO: Got endpoints: latency-svc-mz2mz [1.313051998s]
Feb  3 14:53:31.696: INFO: Created: latency-svc-bwqr5
Feb  3 14:53:31.704: INFO: Got endpoints: latency-svc-bwqr5 [1.40584811s]
Feb  3 14:53:31.762: INFO: Created: latency-svc-gps2s
Feb  3 14:53:31.766: INFO: Got endpoints: latency-svc-gps2s [1.381155193s]
Feb  3 14:53:31.923: INFO: Created: latency-svc-m4wbl
Feb  3 14:53:31.943: INFO: Got endpoints: latency-svc-m4wbl [1.299790102s]
Feb  3 14:53:32.008: INFO: Created: latency-svc-25ptr
Feb  3 14:53:32.082: INFO: Got endpoints: latency-svc-25ptr [1.239394216s]
Feb  3 14:53:32.115: INFO: Created: latency-svc-tzv57
Feb  3 14:53:32.118: INFO: Got endpoints: latency-svc-tzv57 [1.21784743s]
Feb  3 14:53:32.158: INFO: Created: latency-svc-bdrqz
Feb  3 14:53:32.227: INFO: Got endpoints: latency-svc-bdrqz [1.295752319s]
Feb  3 14:53:32.273: INFO: Created: latency-svc-89v2x
Feb  3 14:53:32.274: INFO: Got endpoints: latency-svc-89v2x [1.202287581s]
Feb  3 14:53:32.324: INFO: Created: latency-svc-j4wmd
Feb  3 14:53:32.378: INFO: Got endpoints: latency-svc-j4wmd [1.261702282s]
Feb  3 14:53:32.416: INFO: Created: latency-svc-s4mtb
Feb  3 14:53:32.429: INFO: Got endpoints: latency-svc-s4mtb [1.223465424s]
Feb  3 14:53:32.454: INFO: Created: latency-svc-ntr5t
Feb  3 14:53:32.471: INFO: Got endpoints: latency-svc-ntr5t [1.227521502s]
Feb  3 14:53:32.569: INFO: Created: latency-svc-m5cgc
Feb  3 14:53:32.575: INFO: Got endpoints: latency-svc-m5cgc [1.234639053s]
Feb  3 14:53:32.621: INFO: Created: latency-svc-dsgqx
Feb  3 14:53:32.664: INFO: Got endpoints: latency-svc-dsgqx [1.299407891s]
Feb  3 14:53:32.696: INFO: Created: latency-svc-6bcqw
Feb  3 14:53:32.729: INFO: Got endpoints: latency-svc-6bcqw [1.29351803s]
Feb  3 14:53:32.738: INFO: Created: latency-svc-mdqlk
Feb  3 14:53:32.739: INFO: Got endpoints: latency-svc-mdqlk [1.179810716s]
Feb  3 14:53:32.878: INFO: Created: latency-svc-fznrj
Feb  3 14:53:32.892: INFO: Got endpoints: latency-svc-fznrj [1.326288713s]
Feb  3 14:53:32.970: INFO: Created: latency-svc-7f2r4
Feb  3 14:53:33.077: INFO: Got endpoints: latency-svc-7f2r4 [1.373113055s]
Feb  3 14:53:33.127: INFO: Created: latency-svc-s24kq
Feb  3 14:53:33.134: INFO: Got endpoints: latency-svc-s24kq [1.366934615s]
Feb  3 14:53:33.164: INFO: Created: latency-svc-vjhsk
Feb  3 14:53:33.235: INFO: Got endpoints: latency-svc-vjhsk [1.290704486s]
Feb  3 14:53:33.248: INFO: Created: latency-svc-5w8x2
Feb  3 14:53:33.262: INFO: Got endpoints: latency-svc-5w8x2 [1.179730962s]
Feb  3 14:53:33.295: INFO: Created: latency-svc-5vrjx
Feb  3 14:53:33.303: INFO: Got endpoints: latency-svc-5vrjx [1.185461801s]
Feb  3 14:53:33.396: INFO: Created: latency-svc-f7qc8
Feb  3 14:53:33.398: INFO: Got endpoints: latency-svc-f7qc8 [1.170766062s]
Feb  3 14:53:33.430: INFO: Created: latency-svc-4h8bb
Feb  3 14:53:33.439: INFO: Got endpoints: latency-svc-4h8bb [1.164437598s]
Feb  3 14:53:33.470: INFO: Created: latency-svc-dj9sk
Feb  3 14:53:33.477: INFO: Got endpoints: latency-svc-dj9sk [1.099076115s]
Feb  3 14:53:33.614: INFO: Created: latency-svc-sqmjj
Feb  3 14:53:33.669: INFO: Got endpoints: latency-svc-sqmjj [1.240535667s]
Feb  3 14:53:33.790: INFO: Created: latency-svc-xzlr5
Feb  3 14:53:33.800: INFO: Got endpoints: latency-svc-xzlr5 [1.328889787s]
Feb  3 14:53:33.853: INFO: Created: latency-svc-rdqmm
Feb  3 14:53:33.935: INFO: Got endpoints: latency-svc-rdqmm [1.360205874s]
Feb  3 14:53:33.938: INFO: Created: latency-svc-8hxhf
Feb  3 14:53:33.992: INFO: Got endpoints: latency-svc-8hxhf [1.327724838s]
Feb  3 14:53:33.999: INFO: Created: latency-svc-7pnnp
Feb  3 14:53:34.024: INFO: Got endpoints: latency-svc-7pnnp [1.294611534s]
Feb  3 14:53:34.092: INFO: Created: latency-svc-br9lf
Feb  3 14:53:34.113: INFO: Got endpoints: latency-svc-br9lf [1.373539947s]
Feb  3 14:53:34.142: INFO: Created: latency-svc-q49xd
Feb  3 14:53:34.173: INFO: Got endpoints: latency-svc-q49xd [1.281693724s]
Feb  3 14:53:34.176: INFO: Created: latency-svc-wc2lq
Feb  3 14:53:34.231: INFO: Got endpoints: latency-svc-wc2lq [1.153860598s]
Feb  3 14:53:34.246: INFO: Created: latency-svc-vzp8p
Feb  3 14:53:34.248: INFO: Got endpoints: latency-svc-vzp8p [1.114433229s]
Feb  3 14:53:34.302: INFO: Created: latency-svc-8hc8g
Feb  3 14:53:34.302: INFO: Got endpoints: latency-svc-8hc8g [1.067202255s]
Feb  3 14:53:34.397: INFO: Created: latency-svc-8tkvg
Feb  3 14:53:34.397: INFO: Got endpoints: latency-svc-8tkvg [1.134641858s]
Feb  3 14:53:34.451: INFO: Created: latency-svc-hnpkn
Feb  3 14:53:34.453: INFO: Got endpoints: latency-svc-hnpkn [1.14982917s]
Feb  3 14:53:34.551: INFO: Created: latency-svc-kg5k4
Feb  3 14:53:34.555: INFO: Got endpoints: latency-svc-kg5k4 [1.157361634s]
Feb  3 14:53:34.618: INFO: Created: latency-svc-whwcl
Feb  3 14:53:34.623: INFO: Got endpoints: latency-svc-whwcl [1.183417948s]
Feb  3 14:53:34.703: INFO: Created: latency-svc-wltvt
Feb  3 14:53:34.706: INFO: Got endpoints: latency-svc-wltvt [1.228526778s]
Feb  3 14:53:34.751: INFO: Created: latency-svc-57dxs
Feb  3 14:53:34.751: INFO: Got endpoints: latency-svc-57dxs [1.081155664s]
Feb  3 14:53:34.792: INFO: Created: latency-svc-m2wzd
Feb  3 14:53:34.842: INFO: Got endpoints: latency-svc-m2wzd [1.041601112s]
Feb  3 14:53:34.874: INFO: Created: latency-svc-2z89v
Feb  3 14:53:34.874: INFO: Got endpoints: latency-svc-2z89v [937.534692ms]
Feb  3 14:53:34.874: INFO: Latencies: [95.487198ms 139.660979ms 249.426585ms 294.171886ms 396.2056ms 425.399932ms 487.337092ms 589.661787ms 881.789816ms 910.301142ms 937.534692ms 1.041601112s 1.067202255s 1.081155664s 1.087165563s 1.096490354s 1.099076115s 1.113873165s 1.114433229s 1.132469263s 1.134641858s 1.13914173s 1.142396351s 1.145626832s 1.14982917s 1.153860598s 1.157361634s 1.160543141s 1.164437598s 1.170766062s 1.179730962s 1.179810716s 1.183417948s 1.184446694s 1.185461801s 1.191279081s 1.193193747s 1.195057291s 1.202287581s 1.20344047s 1.208346215s 1.213116094s 1.216305248s 1.21784743s 1.223465424s 1.227521502s 1.228330963s 1.228526778s 1.230561962s 1.233078732s 1.234639053s 1.239394216s 1.240535667s 1.245254541s 1.255915241s 1.259106546s 1.260834754s 1.261702282s 1.278854074s 1.281693724s 1.285923475s 1.28763729s 1.28994501s 1.290242655s 1.290704486s 1.29351803s 1.294611534s 1.295752319s 1.295961177s 1.296590567s 1.299407891s 1.299790102s 1.30210246s 1.305533815s 1.308580922s 1.309059938s 1.309727514s 1.310387985s 1.310799831s 1.312911625s 1.313051998s 1.315010596s 1.317381513s 1.320958098s 1.322036157s 1.326288713s 1.327724838s 1.328889787s 1.340588256s 1.341005158s 1.342365526s 1.345255833s 1.347494257s 1.349506345s 1.352359125s 1.352826651s 1.353218341s 1.354138648s 1.358984016s 1.360205874s 1.364432111s 1.365263077s 1.366934615s 1.369901683s 1.373113055s 1.373539947s 1.381155193s 1.381266211s 1.38152052s 1.385381494s 1.390053666s 1.400717879s 1.404084281s 1.40584811s 1.409367929s 1.414901219s 1.414990779s 1.415326872s 1.417815568s 1.417873098s 1.421950268s 1.428652378s 1.431590404s 1.43533419s 1.436870271s 1.43714719s 1.438623513s 1.442487634s 1.457299292s 1.459686932s 1.460431674s 1.464860623s 1.472983558s 1.473909088s 1.474109872s 1.48831698s 1.488621036s 1.494442241s 1.496894885s 1.505363674s 1.50572174s 1.506259039s 1.511762072s 1.512779305s 1.513927553s 1.515374825s 1.518015056s 1.53359953s 1.539314062s 1.554245507s 1.555428981s 1.575790768s 1.576714725s 1.579891055s 1.583269449s 1.583798278s 1.586693092s 1.587598649s 1.598875529s 1.59995994s 1.602184463s 1.602193937s 1.608662425s 1.609540118s 1.633892978s 1.634886897s 1.636512183s 1.638497724s 1.646041752s 1.647198986s 1.655670871s 1.659168643s 1.660609803s 1.681378531s 1.681917332s 1.682126626s 1.683625333s 1.687558025s 1.696425595s 1.712278482s 1.721182972s 1.730310053s 1.744630995s 1.753845057s 1.757418321s 1.760040972s 1.760596512s 1.764845526s 1.767081846s 1.787683889s 1.807136288s 1.808590184s 1.825241111s 1.826309635s 1.846903109s 1.858596344s 1.863741473s 1.898930831s 1.905189388s 1.934396011s]
Feb  3 14:53:34.874: INFO: 50 %ile: 1.364432111s
Feb  3 14:53:34.874: INFO: 90 %ile: 1.721182972s
Feb  3 14:53:34.874: INFO: 99 %ile: 1.905189388s
Feb  3 14:53:34.874: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:53:34.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1614" for this suite.
Feb  3 14:54:10.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:54:11.010: INFO: namespace svc-latency-1614 deletion completed in 36.128484732s

• [SLOW TEST:65.197 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:54:11.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c5af7845-1019-43bf-ab51-b60bd2bf06cf
STEP: Creating a pod to test consume secrets
Feb  3 14:54:11.142: INFO: Waiting up to 5m0s for pod "pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319" in namespace "secrets-6590" to be "success or failure"
Feb  3 14:54:11.150: INFO: Pod "pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319": Phase="Pending", Reason="", readiness=false. Elapsed: 7.891462ms
Feb  3 14:54:13.158: INFO: Pod "pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016120759s
Feb  3 14:54:15.170: INFO: Pod "pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02743973s
Feb  3 14:54:17.183: INFO: Pod "pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040499159s
Feb  3 14:54:19.190: INFO: Pod "pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047595229s
Feb  3 14:54:21.195: INFO: Pod "pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053186209s
STEP: Saw pod success
Feb  3 14:54:21.195: INFO: Pod "pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319" satisfied condition "success or failure"
Feb  3 14:54:21.198: INFO: Trying to get logs from node iruya-node pod pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319 container secret-volume-test: 
STEP: delete the pod
Feb  3 14:54:21.341: INFO: Waiting for pod pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319 to disappear
Feb  3 14:54:21.376: INFO: Pod pod-secrets-e7f59fe7-ac2b-49b6-872a-060a7a6f9319 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:54:21.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6590" for this suite.
Feb  3 14:54:27.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:54:27.580: INFO: namespace secrets-6590 deletion completed in 6.196786903s

• [SLOW TEST:16.570 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:54:27.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb  3 14:54:37.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-b433cabf-f21e-4fd7-85a2-0daf036a2ce2 -c busybox-main-container --namespace=emptydir-7253 -- cat /usr/share/volumeshare/shareddata.txt'
Feb  3 14:54:38.227: INFO: stderr: "I0203 14:54:37.943179    3073 log.go:172] (0xc000a3c370) (0xc0008488c0) Create stream\nI0203 14:54:37.943303    3073 log.go:172] (0xc000a3c370) (0xc0008488c0) Stream added, broadcasting: 1\nI0203 14:54:37.951710    3073 log.go:172] (0xc000a3c370) Reply frame received for 1\nI0203 14:54:37.951786    3073 log.go:172] (0xc000a3c370) (0xc00064a1e0) Create stream\nI0203 14:54:37.951806    3073 log.go:172] (0xc000a3c370) (0xc00064a1e0) Stream added, broadcasting: 3\nI0203 14:54:37.955305    3073 log.go:172] (0xc000a3c370) Reply frame received for 3\nI0203 14:54:37.955338    3073 log.go:172] (0xc000a3c370) (0xc0006b8000) Create stream\nI0203 14:54:37.955347    3073 log.go:172] (0xc000a3c370) (0xc0006b8000) Stream added, broadcasting: 5\nI0203 14:54:37.957244    3073 log.go:172] (0xc000a3c370) Reply frame received for 5\nI0203 14:54:38.055929    3073 log.go:172] (0xc000a3c370) Data frame received for 3\nI0203 14:54:38.056043    3073 log.go:172] (0xc00064a1e0) (3) Data frame handling\nI0203 14:54:38.056121    3073 log.go:172] (0xc00064a1e0) (3) Data frame sent\nI0203 14:54:38.212841    3073 log.go:172] (0xc000a3c370) Data frame received for 1\nI0203 14:54:38.213275    3073 log.go:172] (0xc000a3c370) (0xc0006b8000) Stream removed, broadcasting: 5\nI0203 14:54:38.213382    3073 log.go:172] (0xc0008488c0) (1) Data frame handling\nI0203 14:54:38.213452    3073 log.go:172] (0xc0008488c0) (1) Data frame sent\nI0203 14:54:38.213532    3073 log.go:172] (0xc000a3c370) (0xc00064a1e0) Stream removed, broadcasting: 3\nI0203 14:54:38.213605    3073 log.go:172] (0xc000a3c370) (0xc0008488c0) Stream removed, broadcasting: 1\nI0203 14:54:38.213643    3073 log.go:172] (0xc000a3c370) Go away received\nI0203 14:54:38.214764    3073 log.go:172] (0xc000a3c370) (0xc0008488c0) Stream removed, broadcasting: 1\nI0203 14:54:38.214798    3073 log.go:172] (0xc000a3c370) (0xc00064a1e0) Stream removed, broadcasting: 3\nI0203 14:54:38.214813    3073 log.go:172] (0xc000a3c370) (0xc0006b8000) Stream removed, broadcasting: 5\n"
Feb  3 14:54:38.227: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:54:38.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7253" for this suite.
Feb  3 14:54:44.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:54:44.368: INFO: namespace emptydir-7253 deletion completed in 6.132687674s

• [SLOW TEST:16.787 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:54:44.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-73824adc-3fac-45cb-8517-83cd629b5bcf in namespace container-probe-7878
Feb  3 14:54:56.555: INFO: Started pod busybox-73824adc-3fac-45cb-8517-83cd629b5bcf in namespace container-probe-7878
STEP: checking the pod's current state and verifying that restartCount is present
Feb  3 14:54:56.561: INFO: Initial restart count of pod busybox-73824adc-3fac-45cb-8517-83cd629b5bcf is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:58:58.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7878" for this suite.
Feb  3 14:59:04.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 14:59:04.530: INFO: namespace container-probe-7878 deletion completed in 6.188922043s

• [SLOW TEST:260.162 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 14:59:04.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  3 14:59:04.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3655'
Feb  3 14:59:05.204: INFO: stderr: ""
Feb  3 14:59:05.204: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 14:59:05.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3655'
Feb  3 14:59:05.522: INFO: stderr: ""
Feb  3 14:59:05.522: INFO: stdout: "update-demo-nautilus-54z74 update-demo-nautilus-vbhdt "
Feb  3 14:59:05.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:05.639: INFO: stderr: ""
Feb  3 14:59:05.639: INFO: stdout: ""
Feb  3 14:59:05.639: INFO: update-demo-nautilus-54z74 is created but not running
Feb  3 14:59:10.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3655'
Feb  3 14:59:10.784: INFO: stderr: ""
Feb  3 14:59:10.784: INFO: stdout: "update-demo-nautilus-54z74 update-demo-nautilus-vbhdt "
Feb  3 14:59:10.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:10.967: INFO: stderr: ""
Feb  3 14:59:10.967: INFO: stdout: ""
Feb  3 14:59:10.967: INFO: update-demo-nautilus-54z74 is created but not running
Feb  3 14:59:15.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3655'
Feb  3 14:59:16.141: INFO: stderr: ""
Feb  3 14:59:16.141: INFO: stdout: "update-demo-nautilus-54z74 update-demo-nautilus-vbhdt "
Feb  3 14:59:16.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:16.325: INFO: stderr: ""
Feb  3 14:59:16.325: INFO: stdout: "true"
Feb  3 14:59:16.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:16.423: INFO: stderr: ""
Feb  3 14:59:16.423: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:59:16.424: INFO: validating pod update-demo-nautilus-54z74
Feb  3 14:59:16.449: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:59:16.450: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  3 14:59:16.450: INFO: update-demo-nautilus-54z74 is verified up and running
Feb  3 14:59:16.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbhdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:16.560: INFO: stderr: ""
Feb  3 14:59:16.560: INFO: stdout: ""
Feb  3 14:59:16.560: INFO: update-demo-nautilus-vbhdt is created but not running
Feb  3 14:59:21.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3655'
Feb  3 14:59:21.728: INFO: stderr: ""
Feb  3 14:59:21.728: INFO: stdout: "update-demo-nautilus-54z74 update-demo-nautilus-vbhdt "
Feb  3 14:59:21.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:21.878: INFO: stderr: ""
Feb  3 14:59:21.879: INFO: stdout: "true"
Feb  3 14:59:21.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:22.023: INFO: stderr: ""
Feb  3 14:59:22.023: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:59:22.023: INFO: validating pod update-demo-nautilus-54z74
Feb  3 14:59:22.030: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:59:22.030: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  3 14:59:22.030: INFO: update-demo-nautilus-54z74 is verified up and running
Feb  3 14:59:22.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbhdt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:22.200: INFO: stderr: ""
Feb  3 14:59:22.201: INFO: stdout: "true"
Feb  3 14:59:22.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vbhdt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:22.377: INFO: stderr: ""
Feb  3 14:59:22.377: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:59:22.377: INFO: validating pod update-demo-nautilus-vbhdt
Feb  3 14:59:22.404: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:59:22.404: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  3 14:59:22.404: INFO: update-demo-nautilus-vbhdt is verified up and running
STEP: scaling down the replication controller
Feb  3 14:59:22.406: INFO: scanned /root for discovery docs: 
Feb  3 14:59:22.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3655'
Feb  3 14:59:23.867: INFO: stderr: ""
Feb  3 14:59:23.867: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 14:59:23.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3655'
Feb  3 14:59:24.139: INFO: stderr: ""
Feb  3 14:59:24.139: INFO: stdout: "update-demo-nautilus-54z74 update-demo-nautilus-vbhdt "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  3 14:59:29.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3655'
Feb  3 14:59:29.346: INFO: stderr: ""
Feb  3 14:59:29.346: INFO: stdout: "update-demo-nautilus-54z74 "
Feb  3 14:59:29.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:29.457: INFO: stderr: ""
Feb  3 14:59:29.457: INFO: stdout: "true"
Feb  3 14:59:29.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:29.583: INFO: stderr: ""
Feb  3 14:59:29.583: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:59:29.583: INFO: validating pod update-demo-nautilus-54z74
Feb  3 14:59:29.593: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:59:29.593: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 14:59:29.593: INFO: update-demo-nautilus-54z74 is verified up and running
STEP: scaling up the replication controller
Feb  3 14:59:29.596: INFO: scanned /root for discovery docs: 
Feb  3 14:59:29.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3655'
Feb  3 14:59:31.012: INFO: stderr: ""
Feb  3 14:59:31.012: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  3 14:59:31.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3655'
Feb  3 14:59:31.144: INFO: stderr: ""
Feb  3 14:59:31.144: INFO: stdout: "update-demo-nautilus-54z74 update-demo-nautilus-hl9p6 "
Feb  3 14:59:31.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:31.283: INFO: stderr: ""
Feb  3 14:59:31.283: INFO: stdout: "true"
Feb  3 14:59:31.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:31.361: INFO: stderr: ""
Feb  3 14:59:31.361: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:59:31.361: INFO: validating pod update-demo-nautilus-54z74
Feb  3 14:59:31.369: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:59:31.369: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 14:59:31.369: INFO: update-demo-nautilus-54z74 is verified up and running
Feb  3 14:59:31.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hl9p6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:31.490: INFO: stderr: ""
Feb  3 14:59:31.490: INFO: stdout: ""
Feb  3 14:59:31.490: INFO: update-demo-nautilus-hl9p6 is created but not running
Feb  3 14:59:36.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3655'
Feb  3 14:59:36.706: INFO: stderr: ""
Feb  3 14:59:36.706: INFO: stdout: "update-demo-nautilus-54z74 update-demo-nautilus-hl9p6 "
Feb  3 14:59:36.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:36.886: INFO: stderr: ""
Feb  3 14:59:36.886: INFO: stdout: "true"
Feb  3 14:59:36.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:37.026: INFO: stderr: ""
Feb  3 14:59:37.027: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:59:37.027: INFO: validating pod update-demo-nautilus-54z74
Feb  3 14:59:37.034: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:59:37.034: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 14:59:37.034: INFO: update-demo-nautilus-54z74 is verified up and running
Feb  3 14:59:37.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hl9p6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:37.133: INFO: stderr: ""
Feb  3 14:59:37.134: INFO: stdout: ""
Feb  3 14:59:37.134: INFO: update-demo-nautilus-hl9p6 is created but not running
Feb  3 14:59:42.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3655'
Feb  3 14:59:42.412: INFO: stderr: ""
Feb  3 14:59:42.413: INFO: stdout: "update-demo-nautilus-54z74 update-demo-nautilus-hl9p6 "
Feb  3 14:59:42.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:42.551: INFO: stderr: ""
Feb  3 14:59:42.551: INFO: stdout: "true"
Feb  3 14:59:42.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54z74 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:42.704: INFO: stderr: ""
Feb  3 14:59:42.704: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:59:42.704: INFO: validating pod update-demo-nautilus-54z74
Feb  3 14:59:42.716: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:59:42.717: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 14:59:42.717: INFO: update-demo-nautilus-54z74 is verified up and running
Feb  3 14:59:42.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hl9p6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:42.841: INFO: stderr: ""
Feb  3 14:59:42.841: INFO: stdout: "true"
Feb  3 14:59:42.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hl9p6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3655'
Feb  3 14:59:43.056: INFO: stderr: ""
Feb  3 14:59:43.056: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  3 14:59:43.056: INFO: validating pod update-demo-nautilus-hl9p6
Feb  3 14:59:43.066: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  3 14:59:43.066: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  3 14:59:43.066: INFO: update-demo-nautilus-hl9p6 is verified up and running
STEP: using delete to clean up resources
Feb  3 14:59:43.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3655'
Feb  3 14:59:43.160: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 14:59:43.160: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  3 14:59:43.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3655'
Feb  3 14:59:43.366: INFO: stderr: "No resources found.\n"
Feb  3 14:59:43.366: INFO: stdout: ""
Feb  3 14:59:43.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3655 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  3 14:59:43.510: INFO: stderr: ""
Feb  3 14:59:43.510: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 14:59:43.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3655" for this suite.
Feb  3 15:00:05.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:00:05.706: INFO: namespace kubectl-3655 deletion completed in 22.188320426s

• [SLOW TEST:61.174 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
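The `kubectl get pods -o template` probes above decide pod readiness by checking whether a `running` state is recorded for the `update-demo` container, printing `"true"` only when it is; an empty stdout, as for `update-demo-nautilus-hl9p6`, means "created but not running". A minimal Python sketch of that template's logic, using an illustrative pod dict rather than a live API object:

```python
def container_running(pod: dict, container_name: str) -> bool:
    """Mirror the go-template: True only when the named container
    has a containerStatuses entry with a 'running' state."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# Illustrative pod object, shaped like the API response the template walks.
pod = {
    "status": {
        "containerStatuses": [
            {"name": "update-demo", "state": {"running": {"startedAt": "2020-02-03T14:59:00Z"}}}
        ]
    }
}
print(container_running(pod, "update-demo"))  # True
```

When no `containerStatuses` exist yet the check falls through to `False`, which is why the e2e loop above retries every five seconds until the template finally emits `true`.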
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:00:05.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  3 15:00:05.848: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:00:18.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-157" for this suite.
Feb  3 15:00:24.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:00:24.942: INFO: namespace init-container-157 deletion completed in 6.195136875s

• [SLOW TEST:19.236 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
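On a `RestartNever` pod, each init container must run to completion with exit code 0, in order, before the app containers start; the test above asserts that ordering through pod status. A rough Python sketch of the success condition, with a hypothetical status dict:

```python
def init_containers_succeeded(pod_status: dict) -> bool:
    """True once every init container has terminated with exit code 0."""
    statuses = pod_status.get("initContainerStatuses", [])
    return bool(statuses) and all(
        s.get("state", {}).get("terminated", {}).get("exitCode") == 0
        for s in statuses
    )

# Hypothetical status for a pod with two completed init containers.
status = {
    "initContainerStatuses": [
        {"name": "init1", "state": {"terminated": {"exitCode": 0}}},
        {"name": "init2", "state": {"terminated": {"exitCode": 0}}},
    ]
}
print(init_containers_succeeded(status))  # True
```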
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:00:24.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  3 15:00:25.064: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  3 15:00:25.079: INFO: Waiting for terminating namespaces to be deleted...
Feb  3 15:00:25.082: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  3 15:00:25.104: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  3 15:00:25.104: INFO: 	Container weave ready: true, restart count 0
Feb  3 15:00:25.104: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 15:00:25.104: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb  3 15:00:25.104: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  3 15:00:25.104: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  3 15:00:25.117: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  3 15:00:25.117: INFO: 	Container coredns ready: true, restart count 0
Feb  3 15:00:25.117: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb  3 15:00:25.117: INFO: 	Container etcd ready: true, restart count 0
Feb  3 15:00:25.117: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  3 15:00:25.117: INFO: 	Container weave ready: true, restart count 0
Feb  3 15:00:25.117: INFO: 	Container weave-npc ready: true, restart count 0
Feb  3 15:00:25.117: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb  3 15:00:25.117: INFO: 	Container kube-controller-manager ready: true, restart count 19
Feb  3 15:00:25.117: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb  3 15:00:25.117: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  3 15:00:25.117: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb  3 15:00:25.117: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  3 15:00:25.117: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb  3 15:00:25.117: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  3 15:00:25.117: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  3 15:00:25.117: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1d0bb3e6-ccee-4cde-84f1-68c9c4b23001 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-1d0bb3e6-ccee-4cde-84f1-68c9c4b23001 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1d0bb3e6-ccee-4cde-84f1-68c9c4b23001
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:00:45.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7003" for this suite.
Feb  3 15:01:15.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:01:15.552: INFO: namespace sched-pred-7003 deletion completed in 30.16834559s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:50.609 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
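The NodeSelector predicate verified above is an exact-match test: the pod schedules onto a node only if every key/value pair in its `nodeSelector` appears verbatim in the node's labels, which is why the test first stamps a random `kubernetes.io/e2e-…` label onto a node and then relaunches the pod with a matching selector. A small Python sketch of the match (the label key and values below are illustrative, not the random label from this run):

```python
def node_selector_matches(node_labels: dict, node_selector: dict) -> bool:
    """Every selector key/value pair must appear verbatim in the node's labels."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Illustrative node labels, modeled on the test's random e2e label with value "42".
labels = {"kubernetes.io/hostname": "iruya-node", "kubernetes.io/e2e-test": "42"}
print(node_selector_matches(labels, {"kubernetes.io/e2e-test": "42"}))  # True
print(node_selector_matches(labels, {"kubernetes.io/e2e-test": "41"}))  # False
```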
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:01:15.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb  3 15:01:16.203: INFO: created pod pod-service-account-defaultsa
Feb  3 15:01:16.203: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  3 15:01:16.216: INFO: created pod pod-service-account-mountsa
Feb  3 15:01:16.216: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  3 15:01:16.291: INFO: created pod pod-service-account-nomountsa
Feb  3 15:01:16.291: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  3 15:01:16.308: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  3 15:01:16.308: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  3 15:01:16.452: INFO: created pod pod-service-account-mountsa-mountspec
Feb  3 15:01:16.452: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  3 15:01:16.472: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  3 15:01:16.472: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  3 15:01:17.047: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  3 15:01:17.047: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  3 15:01:18.237: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  3 15:01:18.237: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  3 15:01:18.268: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  3 15:01:18.268: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:01:18.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5964" for this suite.
Feb  3 15:02:40.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:02:40.572: INFO: namespace svcaccounts-5964 deletion completed in 1m21.875032108s

• [SLOW TEST:85.020 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
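The nine pods above enumerate the automount decision matrix: a pod-level `automountServiceAccountToken` always wins, otherwise the ServiceAccount's setting applies, and if neither is set the token is mounted by default. A Python sketch of that precedence (`None` stands for an unset field; the comments map rows back to the pods in the log):

```python
def token_volume_mounted(pod_setting, sa_setting):
    """automountServiceAccountToken precedence: pod spec > ServiceAccount > default True."""
    if pod_setting is not None:
        return pod_setting
    if sa_setting is not None:
        return sa_setting
    return True

# Rows mirror the logged matrix:
print(token_volume_mounted(None, None))   # defaultsa             -> True
print(token_volume_mounted(None, False))  # nomountsa             -> False
print(token_volume_mounted(True, False))  # nomountsa-mountspec   -> True
print(token_volume_mounted(False, None))  # defaultsa-nomountspec -> False
```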
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:02:40.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  3 15:02:40.788: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111" in namespace "projected-2822" to be "success or failure"
Feb  3 15:02:40.827: INFO: Pod "downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111": Phase="Pending", Reason="", readiness=false. Elapsed: 39.079256ms
Feb  3 15:02:42.839: INFO: Pod "downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051177353s
Feb  3 15:02:44.865: INFO: Pod "downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077316749s
Feb  3 15:02:46.883: INFO: Pod "downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095134564s
Feb  3 15:02:48.906: INFO: Pod "downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.117604419s
STEP: Saw pod success
Feb  3 15:02:48.906: INFO: Pod "downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111" satisfied condition "success or failure"
Feb  3 15:02:48.914: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111 container client-container: 
STEP: delete the pod
Feb  3 15:02:48.984: INFO: Waiting for pod downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111 to disappear
Feb  3 15:02:48.995: INFO: Pod downwardapi-volume-dee8bd23-b8aa-43ef-9cb2-34598bd6b111 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:02:48.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2822" for this suite.
Feb  3 15:02:55.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:02:55.162: INFO: namespace projected-2822 deletion completed in 6.158289652s

• [SLOW TEST:14.589 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
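The downward API volume in this test projects `metadata.name` into a file that the client container then reads back; the pod succeeds when the file content equals the pod's own name. A toy Python sketch of the fieldRef resolution, limited to the metadata fields this kind of test exercises (the pod name below is illustrative; the real one carried a generated UUID):

```python
def resolve_field_ref(pod: dict, field_path: str) -> str:
    """Resolve a downward API fieldRef for the supported metadata paths."""
    supported = {
        "metadata.name": pod["metadata"]["name"],
        "metadata.namespace": pod["metadata"]["namespace"],
    }
    return supported[field_path]

pod = {"metadata": {"name": "downwardapi-volume-demo", "namespace": "projected-2822"}}
print(resolve_field_ref(pod, "metadata.name"))  # downwardapi-volume-demo
```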
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:02:55.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7457
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  3 15:02:55.411: INFO: Found 0 stateful pods, waiting for 3
Feb  3 15:03:05.425: INFO: Found 2 stateful pods, waiting for 3
Feb  3 15:03:15.427: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 15:03:15.427: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 15:03:15.427: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  3 15:03:25.428: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 15:03:25.428: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 15:03:25.428: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  3 15:03:25.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7457 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 15:03:27.941: INFO: stderr: "I0203 15:03:27.574251    3793 log.go:172] (0xc000352370) (0xc0005d4820) Create stream\nI0203 15:03:27.574337    3793 log.go:172] (0xc000352370) (0xc0005d4820) Stream added, broadcasting: 1\nI0203 15:03:27.578297    3793 log.go:172] (0xc000352370) Reply frame received for 1\nI0203 15:03:27.578437    3793 log.go:172] (0xc000352370) (0xc0006ec0a0) Create stream\nI0203 15:03:27.578460    3793 log.go:172] (0xc000352370) (0xc0006ec0a0) Stream added, broadcasting: 3\nI0203 15:03:27.580024    3793 log.go:172] (0xc000352370) Reply frame received for 3\nI0203 15:03:27.580113    3793 log.go:172] (0xc000352370) (0xc0006ec140) Create stream\nI0203 15:03:27.580142    3793 log.go:172] (0xc000352370) (0xc0006ec140) Stream added, broadcasting: 5\nI0203 15:03:27.583829    3793 log.go:172] (0xc000352370) Reply frame received for 5\nI0203 15:03:27.756888    3793 log.go:172] (0xc000352370) Data frame received for 5\nI0203 15:03:27.756957    3793 log.go:172] (0xc0006ec140) (5) Data frame handling\nI0203 15:03:27.756986    3793 log.go:172] (0xc0006ec140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 15:03:27.816685    3793 log.go:172] (0xc000352370) Data frame received for 3\nI0203 15:03:27.816907    3793 log.go:172] (0xc0006ec0a0) (3) Data frame handling\nI0203 15:03:27.816945    3793 log.go:172] (0xc0006ec0a0) (3) Data frame sent\nI0203 15:03:27.930078    3793 log.go:172] (0xc000352370) Data frame received for 1\nI0203 15:03:27.930348    3793 log.go:172] (0xc0005d4820) (1) Data frame handling\nI0203 15:03:27.930406    3793 log.go:172] (0xc0005d4820) (1) Data frame sent\nI0203 15:03:27.930529    3793 log.go:172] (0xc000352370) (0xc0005d4820) Stream removed, broadcasting: 1\nI0203 15:03:27.931134    3793 log.go:172] (0xc000352370) (0xc0006ec0a0) Stream removed, broadcasting: 3\nI0203 15:03:27.931622    3793 log.go:172] (0xc000352370) (0xc0006ec140) Stream removed, broadcasting: 5\nI0203 15:03:27.931897    3793 log.go:172] 
(0xc000352370) Go away received\nI0203 15:03:27.931951    3793 log.go:172] (0xc000352370) (0xc0005d4820) Stream removed, broadcasting: 1\nI0203 15:03:27.931981    3793 log.go:172] (0xc000352370) (0xc0006ec0a0) Stream removed, broadcasting: 3\nI0203 15:03:27.931993    3793 log.go:172] (0xc000352370) (0xc0006ec140) Stream removed, broadcasting: 5\n"
Feb  3 15:03:27.942: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 15:03:27.942: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  3 15:03:37.993: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  3 15:03:48.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7457 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 15:03:48.631: INFO: stderr: "I0203 15:03:48.335609    3821 log.go:172] (0xc000a36420) (0xc0009f28c0) Create stream\nI0203 15:03:48.335762    3821 log.go:172] (0xc000a36420) (0xc0009f28c0) Stream added, broadcasting: 1\nI0203 15:03:48.346396    3821 log.go:172] (0xc000a36420) Reply frame received for 1\nI0203 15:03:48.346477    3821 log.go:172] (0xc000a36420) (0xc0009f2000) Create stream\nI0203 15:03:48.346494    3821 log.go:172] (0xc000a36420) (0xc0009f2000) Stream added, broadcasting: 3\nI0203 15:03:48.348652    3821 log.go:172] (0xc000a36420) Reply frame received for 3\nI0203 15:03:48.348687    3821 log.go:172] (0xc000a36420) (0xc0009f20a0) Create stream\nI0203 15:03:48.348698    3821 log.go:172] (0xc000a36420) (0xc0009f20a0) Stream added, broadcasting: 5\nI0203 15:03:48.349895    3821 log.go:172] (0xc000a36420) Reply frame received for 5\nI0203 15:03:48.466279    3821 log.go:172] (0xc000a36420) Data frame received for 5\nI0203 15:03:48.466403    3821 log.go:172] (0xc0009f20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0203 15:03:48.466452    3821 log.go:172] (0xc000a36420) Data frame received for 3\nI0203 15:03:48.466499    3821 log.go:172] (0xc0009f2000) (3) Data frame handling\nI0203 15:03:48.466534    3821 log.go:172] (0xc0009f2000) (3) Data frame sent\nI0203 15:03:48.466681    3821 log.go:172] (0xc0009f20a0) (5) Data frame sent\nI0203 15:03:48.602330    3821 log.go:172] (0xc000a36420) Data frame received for 1\nI0203 15:03:48.602475    3821 log.go:172] (0xc000a36420) (0xc0009f2000) Stream removed, broadcasting: 3\nI0203 15:03:48.602605    3821 log.go:172] (0xc0009f28c0) (1) Data frame handling\nI0203 15:03:48.602646    3821 log.go:172] (0xc0009f28c0) (1) Data frame sent\nI0203 15:03:48.602740    3821 log.go:172] (0xc000a36420) (0xc0009f20a0) Stream removed, broadcasting: 5\nI0203 15:03:48.602772    3821 log.go:172] (0xc000a36420) (0xc0009f28c0) Stream removed, broadcasting: 1\nI0203 15:03:48.602828    3821 log.go:172] 
(0xc000a36420) Go away received\nI0203 15:03:48.605190    3821 log.go:172] (0xc000a36420) (0xc0009f28c0) Stream removed, broadcasting: 1\nI0203 15:03:48.605211    3821 log.go:172] (0xc000a36420) (0xc0009f2000) Stream removed, broadcasting: 3\nI0203 15:03:48.605225    3821 log.go:172] (0xc000a36420) (0xc0009f20a0) Stream removed, broadcasting: 5\n"
Feb  3 15:03:48.632: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 15:03:48.632: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 15:03:58.683: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
Feb  3 15:03:58.684: INFO: Waiting for Pod statefulset-7457/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 15:03:58.684: INFO: Waiting for Pod statefulset-7457/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 15:04:08.698: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
Feb  3 15:04:08.699: INFO: Waiting for Pod statefulset-7457/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 15:04:08.699: INFO: Waiting for Pod statefulset-7457/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 15:04:18.710: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
Feb  3 15:04:18.711: INFO: Waiting for Pod statefulset-7457/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 15:04:28.698: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
Feb  3 15:04:28.698: INFO: Waiting for Pod statefulset-7457/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  3 15:04:38.698: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  3 15:04:48.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7457 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  3 15:04:49.165: INFO: stderr: "I0203 15:04:48.875667    3841 log.go:172] (0xc00055a210) (0xc000548640) Create stream\nI0203 15:04:48.875780    3841 log.go:172] (0xc00055a210) (0xc000548640) Stream added, broadcasting: 1\nI0203 15:04:48.881197    3841 log.go:172] (0xc00055a210) Reply frame received for 1\nI0203 15:04:48.881272    3841 log.go:172] (0xc00055a210) (0xc0005a4320) Create stream\nI0203 15:04:48.881282    3841 log.go:172] (0xc00055a210) (0xc0005a4320) Stream added, broadcasting: 3\nI0203 15:04:48.883043    3841 log.go:172] (0xc00055a210) Reply frame received for 3\nI0203 15:04:48.883082    3841 log.go:172] (0xc00055a210) (0xc0005486e0) Create stream\nI0203 15:04:48.883110    3841 log.go:172] (0xc00055a210) (0xc0005486e0) Stream added, broadcasting: 5\nI0203 15:04:48.884259    3841 log.go:172] (0xc00055a210) Reply frame received for 5\nI0203 15:04:49.033810    3841 log.go:172] (0xc00055a210) Data frame received for 5\nI0203 15:04:49.033851    3841 log.go:172] (0xc0005486e0) (5) Data frame handling\nI0203 15:04:49.033875    3841 log.go:172] (0xc0005486e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0203 15:04:49.081834    3841 log.go:172] (0xc00055a210) Data frame received for 3\nI0203 15:04:49.081862    3841 log.go:172] (0xc0005a4320) (3) Data frame handling\nI0203 15:04:49.081892    3841 log.go:172] (0xc0005a4320) (3) Data frame sent\nI0203 15:04:49.157470    3841 log.go:172] (0xc00055a210) (0xc0005a4320) Stream removed, broadcasting: 3\nI0203 15:04:49.157513    3841 log.go:172] (0xc00055a210) Data frame received for 1\nI0203 15:04:49.157537    3841 log.go:172] (0xc000548640) (1) Data frame handling\nI0203 15:04:49.157601    3841 log.go:172] (0xc00055a210) (0xc0005486e0) Stream removed, broadcasting: 5\nI0203 15:04:49.157664    3841 log.go:172] (0xc000548640) (1) Data frame sent\nI0203 15:04:49.157713    3841 log.go:172] (0xc00055a210) (0xc000548640) Stream removed, broadcasting: 1\nI0203 15:04:49.157786    3841 log.go:172] (0xc00055a210) Go away received\nI0203 15:04:49.158422    3841 log.go:172] (0xc00055a210) (0xc000548640) Stream removed, broadcasting: 1\nI0203 15:04:49.158443    3841 log.go:172] (0xc00055a210) (0xc0005a4320) Stream removed, broadcasting: 3\nI0203 15:04:49.158453    3841 log.go:172] (0xc00055a210) (0xc0005486e0) Stream removed, broadcasting: 5\n"
Feb  3 15:04:49.166: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  3 15:04:49.166: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  3 15:04:59.366: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  3 15:05:09.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7457 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  3 15:05:09.838: INFO: stderr: "I0203 15:05:09.657941    3858 log.go:172] (0xc000116d10) (0xc0005d88c0) Create stream\nI0203 15:05:09.658195    3858 log.go:172] (0xc000116d10) (0xc0005d88c0) Stream added, broadcasting: 1\nI0203 15:05:09.665445    3858 log.go:172] (0xc000116d10) Reply frame received for 1\nI0203 15:05:09.665485    3858 log.go:172] (0xc000116d10) (0xc00079e000) Create stream\nI0203 15:05:09.665495    3858 log.go:172] (0xc000116d10) (0xc00079e000) Stream added, broadcasting: 3\nI0203 15:05:09.666929    3858 log.go:172] (0xc000116d10) Reply frame received for 3\nI0203 15:05:09.666961    3858 log.go:172] (0xc000116d10) (0xc00061a000) Create stream\nI0203 15:05:09.666968    3858 log.go:172] (0xc000116d10) (0xc00061a000) Stream added, broadcasting: 5\nI0203 15:05:09.668750    3858 log.go:172] (0xc000116d10) Reply frame received for 5\nI0203 15:05:09.753920    3858 log.go:172] (0xc000116d10) Data frame received for 3\nI0203 15:05:09.754128    3858 log.go:172] (0xc00079e000) (3) Data frame handling\nI0203 15:05:09.754179    3858 log.go:172] (0xc00079e000) (3) Data frame sent\nI0203 15:05:09.754574    3858 log.go:172] (0xc000116d10) Data frame received for 5\nI0203 15:05:09.754595    3858 log.go:172] (0xc00061a000) (5) Data frame handling\nI0203 15:05:09.754621    3858 log.go:172] (0xc00061a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0203 15:05:09.828225    3858 log.go:172] (0xc000116d10) (0xc00079e000) Stream removed, broadcasting: 3\nI0203 15:05:09.828457    3858 log.go:172] (0xc000116d10) (0xc00061a000) Stream removed, broadcasting: 5\nI0203 15:05:09.828569    3858 log.go:172] (0xc000116d10) Data frame received for 1\nI0203 15:05:09.828592    3858 log.go:172] (0xc0005d88c0) (1) Data frame handling\nI0203 15:05:09.828609    3858 log.go:172] (0xc0005d88c0) (1) Data frame sent\nI0203 15:05:09.828631    3858 log.go:172] (0xc000116d10) (0xc0005d88c0) Stream removed, broadcasting: 1\nI0203 15:05:09.828644    3858 log.go:172] (0xc000116d10) Go away received\nI0203 15:05:09.829688    3858 log.go:172] (0xc000116d10) (0xc0005d88c0) Stream removed, broadcasting: 1\nI0203 15:05:09.829706    3858 log.go:172] (0xc000116d10) (0xc00079e000) Stream removed, broadcasting: 3\nI0203 15:05:09.829712    3858 log.go:172] (0xc000116d10) (0xc00061a000) Stream removed, broadcasting: 5\n"
Feb  3 15:05:09.838: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  3 15:05:09.838: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  3 15:05:19.904: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
Feb  3 15:05:19.905: INFO: Waiting for Pod statefulset-7457/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 15:05:19.905: INFO: Waiting for Pod statefulset-7457/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 15:05:19.905: INFO: Waiting for Pod statefulset-7457/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 15:05:29.927: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
Feb  3 15:05:29.927: INFO: Waiting for Pod statefulset-7457/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 15:05:29.927: INFO: Waiting for Pod statefulset-7457/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 15:05:39.930: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
Feb  3 15:05:39.930: INFO: Waiting for Pod statefulset-7457/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 15:05:39.930: INFO: Waiting for Pod statefulset-7457/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 15:05:50.071: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
Feb  3 15:05:50.071: INFO: Waiting for Pod statefulset-7457/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 15:05:59.937: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
Feb  3 15:05:59.937: INFO: Waiting for Pod statefulset-7457/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  3 15:06:09.976: INFO: Waiting for StatefulSet statefulset-7457/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  3 15:06:20.008: INFO: Deleting all statefulset in ns statefulset-7457
Feb  3 15:06:20.023: INFO: Scaling statefulset ss2 to 0
Feb  3 15:06:50.070: INFO: Waiting for statefulset status.replicas updated to 0
Feb  3 15:06:50.078: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:06:50.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7457" for this suite.
Feb  3 15:06:59.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:06:59.395: INFO: namespace statefulset-7457 deletion completed in 9.288177206s

• [SLOW TEST:244.232 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:06:59.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  3 15:06:59.534: INFO: Number of nodes with available pods: 0
Feb  3 15:06:59.534: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:00.557: INFO: Number of nodes with available pods: 0
Feb  3 15:07:00.557: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:01.694: INFO: Number of nodes with available pods: 0
Feb  3 15:07:01.694: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:02.558: INFO: Number of nodes with available pods: 0
Feb  3 15:07:02.559: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:03.556: INFO: Number of nodes with available pods: 0
Feb  3 15:07:03.556: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:05.531: INFO: Number of nodes with available pods: 0
Feb  3 15:07:05.531: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:05.954: INFO: Number of nodes with available pods: 0
Feb  3 15:07:05.954: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:06.612: INFO: Number of nodes with available pods: 0
Feb  3 15:07:06.612: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:07.548: INFO: Number of nodes with available pods: 0
Feb  3 15:07:07.548: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:08.558: INFO: Number of nodes with available pods: 0
Feb  3 15:07:08.558: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:09.547: INFO: Number of nodes with available pods: 1
Feb  3 15:07:09.548: INFO: Node iruya-node is running more than one daemon pod
Feb  3 15:07:10.560: INFO: Number of nodes with available pods: 2
Feb  3 15:07:10.560: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  3 15:07:10.725: INFO: Number of nodes with available pods: 1
Feb  3 15:07:10.726: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:11.949: INFO: Number of nodes with available pods: 1
Feb  3 15:07:11.949: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:12.861: INFO: Number of nodes with available pods: 1
Feb  3 15:07:12.861: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:13.743: INFO: Number of nodes with available pods: 1
Feb  3 15:07:13.743: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:14.925: INFO: Number of nodes with available pods: 1
Feb  3 15:07:14.925: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:15.746: INFO: Number of nodes with available pods: 1
Feb  3 15:07:15.746: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:16.755: INFO: Number of nodes with available pods: 1
Feb  3 15:07:16.755: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:17.743: INFO: Number of nodes with available pods: 1
Feb  3 15:07:17.743: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:18.998: INFO: Number of nodes with available pods: 1
Feb  3 15:07:18.999: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:19.877: INFO: Number of nodes with available pods: 1
Feb  3 15:07:19.877: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:20.744: INFO: Number of nodes with available pods: 1
Feb  3 15:07:20.744: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  3 15:07:21.740: INFO: Number of nodes with available pods: 2
Feb  3 15:07:21.740: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7066, will wait for the garbage collector to delete the pods
Feb  3 15:07:21.839: INFO: Deleting DaemonSet.extensions daemon-set took: 19.804072ms
Feb  3 15:07:22.340: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.53797ms
Feb  3 15:07:37.949: INFO: Number of nodes with available pods: 0
Feb  3 15:07:37.950: INFO: Number of running nodes: 0, number of available pods: 0
Feb  3 15:07:37.953: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7066/daemonsets","resourceVersion":"22958991"},"items":null}

Feb  3 15:07:37.956: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7066/pods","resourceVersion":"22958991"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:07:37.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7066" for this suite.
Feb  3 15:07:44.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:07:44.116: INFO: namespace daemonsets-7066 deletion completed in 6.140116242s

• [SLOW TEST:44.721 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:07:44.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-c7320e80-758d-47a0-a7bb-e882f6bc5f05
STEP: Creating a pod to test consume secrets
Feb  3 15:07:44.216: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b" in namespace "projected-3133" to be "success or failure"
Feb  3 15:07:44.335: INFO: Pod "pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b": Phase="Pending", Reason="", readiness=false. Elapsed: 118.946337ms
Feb  3 15:07:46.343: INFO: Pod "pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126209989s
Feb  3 15:07:48.353: INFO: Pod "pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136454171s
Feb  3 15:07:50.364: INFO: Pod "pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147382195s
Feb  3 15:07:52.376: INFO: Pod "pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159781266s
Feb  3 15:07:54.385: INFO: Pod "pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168230081s
STEP: Saw pod success
Feb  3 15:07:54.385: INFO: Pod "pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b" satisfied condition "success or failure"
Feb  3 15:07:54.389: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b container secret-volume-test: 
STEP: delete the pod
Feb  3 15:07:54.439: INFO: Waiting for pod pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b to disappear
Feb  3 15:07:54.447: INFO: Pod pod-projected-secrets-3835e8ec-0cc9-420a-b222-5975b8acd51b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:07:54.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3133" for this suite.
Feb  3 15:08:00.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:08:00.703: INFO: namespace projected-3133 deletion completed in 6.249647286s

• [SLOW TEST:16.587 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:08:00.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb  3 15:08:00.833: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  3 15:08:00.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9296'
Feb  3 15:08:01.427: INFO: stderr: ""
Feb  3 15:08:01.427: INFO: stdout: "service/redis-slave created\n"
Feb  3 15:08:01.428: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  3 15:08:01.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9296'
Feb  3 15:08:01.897: INFO: stderr: ""
Feb  3 15:08:01.898: INFO: stdout: "service/redis-master created\n"
Feb  3 15:08:01.899: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  3 15:08:01.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9296'
Feb  3 15:08:02.591: INFO: stderr: ""
Feb  3 15:08:02.594: INFO: stdout: "service/frontend created\n"
Feb  3 15:08:02.595: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  3 15:08:02.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9296'
Feb  3 15:08:02.997: INFO: stderr: ""
Feb  3 15:08:02.997: INFO: stdout: "deployment.apps/frontend created\n"
Feb  3 15:08:02.998: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  3 15:08:02.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9296'
Feb  3 15:08:03.731: INFO: stderr: ""
Feb  3 15:08:03.731: INFO: stdout: "deployment.apps/redis-master created\n"
Feb  3 15:08:03.732: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  3 15:08:03.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9296'
Feb  3 15:08:04.982: INFO: stderr: ""
Feb  3 15:08:04.982: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb  3 15:08:04.982: INFO: Waiting for all frontend pods to be Running.
Feb  3 15:08:25.034: INFO: Waiting for frontend to serve content.
Feb  3 15:08:25.525: INFO: Trying to add a new entry to the guestbook.
Feb  3 15:08:25.580: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  3 15:08:26.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9296'
Feb  3 15:08:26.177: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 15:08:26.177: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 15:08:26.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9296'
Feb  3 15:08:26.478: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 15:08:26.482: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 15:08:26.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9296'
Feb  3 15:08:26.648: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 15:08:26.648: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 15:08:26.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9296'
Feb  3 15:08:26.911: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 15:08:26.911: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 15:08:26.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9296'
Feb  3 15:08:27.269: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 15:08:27.269: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  3 15:08:27.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9296'
Feb  3 15:08:27.449: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  3 15:08:27.449: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:08:27.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9296" for this suite.
Feb  3 15:09:19.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:09:19.760: INFO: namespace kubectl-9296 deletion completed in 52.307116182s

• [SLOW TEST:79.056 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:09:19.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb  3 15:09:20.013: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix771076222/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:09:20.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1839" for this suite.
Feb  3 15:09:26.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:09:26.203: INFO: namespace kubectl-1839 deletion completed in 6.110192866s

• [SLOW TEST:6.440 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:09:26.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  3 15:09:26.324: INFO: Waiting up to 5m0s for pod "pod-ee732000-2761-4245-8ec6-43c922bf79db" in namespace "emptydir-8888" to be "success or failure"
Feb  3 15:09:26.334: INFO: Pod "pod-ee732000-2761-4245-8ec6-43c922bf79db": Phase="Pending", Reason="", readiness=false. Elapsed: 9.229563ms
Feb  3 15:09:28.348: INFO: Pod "pod-ee732000-2761-4245-8ec6-43c922bf79db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023075487s
Feb  3 15:09:30.358: INFO: Pod "pod-ee732000-2761-4245-8ec6-43c922bf79db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033731108s
Feb  3 15:09:32.367: INFO: Pod "pod-ee732000-2761-4245-8ec6-43c922bf79db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04194758s
Feb  3 15:09:34.377: INFO: Pod "pod-ee732000-2761-4245-8ec6-43c922bf79db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05278779s
Feb  3 15:09:36.944: INFO: Pod "pod-ee732000-2761-4245-8ec6-43c922bf79db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.619389936s
STEP: Saw pod success
Feb  3 15:09:36.944: INFO: Pod "pod-ee732000-2761-4245-8ec6-43c922bf79db" satisfied condition "success or failure"
Feb  3 15:09:36.951: INFO: Trying to get logs from node iruya-node pod pod-ee732000-2761-4245-8ec6-43c922bf79db container test-container: 
STEP: delete the pod
Feb  3 15:09:37.323: INFO: Waiting for pod pod-ee732000-2761-4245-8ec6-43c922bf79db to disappear
Feb  3 15:09:37.331: INFO: Pod pod-ee732000-2761-4245-8ec6-43c922bf79db no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:09:37.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8888" for this suite.
Feb  3 15:09:43.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:09:43.512: INFO: namespace emptydir-8888 deletion completed in 6.174448822s

• [SLOW TEST:17.308 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  3 15:09:43.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb  3 15:09:43.654: INFO: Waiting up to 5m0s for pod "client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da" in namespace "containers-8649" to be "success or failure"
Feb  3 15:09:43.734: INFO: Pod "client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da": Phase="Pending", Reason="", readiness=false. Elapsed: 80.527221ms
Feb  3 15:09:45.752: INFO: Pod "client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097990007s
Feb  3 15:09:47.760: INFO: Pod "client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106519014s
Feb  3 15:09:49.771: INFO: Pod "client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117379105s
Feb  3 15:09:51.782: INFO: Pod "client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da": Phase="Running", Reason="", readiness=true. Elapsed: 8.127757339s
Feb  3 15:09:53.796: INFO: Pod "client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.141639118s
STEP: Saw pod success
Feb  3 15:09:53.796: INFO: Pod "client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da" satisfied condition "success or failure"
Feb  3 15:09:53.802: INFO: Trying to get logs from node iruya-node pod client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da container test-container: 
STEP: delete the pod
Feb  3 15:09:53.898: INFO: Waiting for pod client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da to disappear
Feb  3 15:09:53.905: INFO: Pod client-containers-7c58e5c2-0d85-4931-94b6-8c089d3f73da no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  3 15:09:53.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8649" for this suite.
Feb  3 15:10:00.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  3 15:10:00.174: INFO: namespace containers-8649 deletion completed in 6.259398847s

• [SLOW TEST:16.661 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSFeb  3 15:10:00.175: INFO: Running AfterSuite actions on all nodes
Feb  3 15:10:00.175: INFO: Running AfterSuite actions on node 1
Feb  3 15:10:00.175: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8028.218 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS