I0222 12:56:11.246811 8 e2e.go:243] Starting e2e run "6fd81cb8-7201-4007-8ed7-6e093311ea59" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582376170 - Will randomize all specs
Will run 215 of 4412 specs

Feb 22 12:56:11.610: INFO: >>> kubeConfig: /root/.kube/config
Feb 22 12:56:11.615: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 22 12:56:11.650: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 22 12:56:11.686: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 22 12:56:11.686: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 22 12:56:11.686: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 22 12:56:11.698: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 22 12:56:11.698: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 22 12:56:11.698: INFO: e2e test version: v1.15.7
Feb 22 12:56:11.699: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 12:56:11.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Feb 22 12:56:11.892: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6567
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 22 12:56:11.961: INFO: Found 0 stateful pods, waiting for 3
Feb 22 12:56:21.978: INFO: Found 1 stateful pods, waiting for 3
Feb 22 12:56:31.976: INFO: Found 2 stateful pods, waiting for 3
Feb 22 12:56:41.996: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 12:56:41.999: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 12:56:41.999: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 22 12:56:51.980: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 12:56:51.980: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 12:56:51.980: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 22 12:56:52.029: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 22 12:57:02.109: INFO: Updating stateful set ss2
Feb 22 12:57:02.141: INFO: Waiting for Pod statefulset-6567/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 22 12:57:12.679: INFO: Found 2 stateful pods, waiting for 3
Feb 22 12:57:22.694: INFO: Found 2 stateful pods, waiting for 3
Feb 22 12:57:33.875: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 12:57:33.875: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 12:57:33.875: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 22 12:57:42.691: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 12:57:42.692: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 12:57:42.692: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 22 12:57:42.729: INFO: Updating stateful set ss2
Feb 22 12:57:42.874: INFO: Waiting for Pod statefulset-6567/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 12:57:52.925: INFO: Updating stateful set ss2
Feb 22 12:57:52.937: INFO: Waiting for StatefulSet statefulset-6567/ss2 to complete update
Feb 22 12:57:52.937: INFO: Waiting for Pod statefulset-6567/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 12:58:02.955: INFO: Waiting for StatefulSet statefulset-6567/ss2 to complete update
Feb 22 12:58:02.955: INFO: Waiting for Pod statefulset-6567/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 12:58:12.987: INFO: Waiting for StatefulSet statefulset-6567/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 22 12:58:22.957: INFO: Deleting all statefulset in ns statefulset-6567
Feb 22 12:58:22.970: INFO: Scaling statefulset ss2 to 0
Feb 22 12:59:03.027: INFO: Waiting for statefulset status.replicas updated to 0
Feb 22 12:59:03.031: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 12:59:03.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6567" for this suite.
Feb 22 12:59:11.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 12:59:11.284: INFO: namespace statefulset-6567 deletion completed in 8.232538202s

• [SLOW TEST:179.585 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 12:59:11.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
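Editor's note: the canary and phased behaviour exercised above is driven by the StatefulSet RollingUpdate `partition` field — only pods with an ordinal greater than or equal to the partition are moved to the new revision. The e2e framework builds the `ss2` set programmatically, so the manifest below is an illustrative hand-written equivalent (labels are assumed; the service name `test` and images match the log):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # headless service created earlier in the log
  replicas: 3
  selector:
    matchLabels:
      app: ss2                 # assumed label; the real test generates its own
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # With 3 replicas (ordinals 0-2): partition 3 applies no update,
      # partition 2 canaries only ss2-2, partition 0 rolls all pods.
      partition: 2
```

Lowering the partition stepwise (2 → 1 → 0) after bumping the image to nginx:1.15-alpine reproduces the phased rollout the log shows, e.g. `kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'`.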
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 22 13:02:13.594: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:13.687: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:15.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:15.699: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:17.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:17.701: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:19.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:19.695: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:21.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:21.697: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:23.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:23.697: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:25.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:25.700: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:27.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:27.701: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:29.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:29.696: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:31.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:31.700: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:33.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:33.697: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:35.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:35.696: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:37.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:37.695: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:39.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:39.696: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:41.687: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:41.699: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:43.688: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:43.719: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:45.688: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:45.704: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 22 13:02:47.688: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 22 13:02:47.719: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:02:47.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-999" for this suite.
Feb 22 13:03:09.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:03:10.054: INFO: namespace container-lifecycle-hook-999 deletion completed in 22.323674629s

• [SLOW TEST:238.768 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:03:10.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:03:20.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2211" for this suite.
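Editor's note: the lifecycle-hook spec above creates a pod whose container carries a `postStart` exec hook (the log's first STEP also spins up a helper container that serves the HTTPGet variant of the test). The framework builds the pod in Go, so this is an illustrative hand-written equivalent — image and hook command are assumptions, only the pod name matches the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.29          # assumed image for illustration
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        # Executed right after the container is created; there is no
        # ordering guarantee relative to the container's ENTRYPOINT,
        # and a failing hook causes the container to be killed.
        exec:
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
```

The "check poststart hook" step then verifies the hook's side effect before deleting the pod, which is why the log polls until the pod disappears.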
Feb 22 13:04:12.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:04:12.480: INFO: namespace kubelet-test-2211 deletion completed in 52.242962042s

• [SLOW TEST:62.425 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:04:12.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 13:04:12.621: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 22 13:04:12.641: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 22 13:04:17.697: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 22 13:04:19.708: INFO: Creating
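Editor's note: the /etc/hosts test above exercises `pod.spec.hostAliases`, which the kubelet merges into the hosts file it manages for each container. A minimal hand-written sketch (pod name, image tag, IPs, and hostnames are illustrative, not the ones the e2e framework generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox:1.29
    # Print the kubelet-managed hosts file; the aliases above
    # should appear as extra entries.
    command: ["cat", "/etc/hosts"]
```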
deployment "test-rolling-update-deployment" Feb 22 13:04:19.715: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 22 13:04:19.726: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 22 13:04:21.766: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 22 13:04:21.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:04:23.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:04:25.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:04:27.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973459, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:04:31.686: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 22 13:04:31.754: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-917,SelfLink:/apis/apps/v1/namespaces/deployment-917/deployments/test-rolling-update-deployment,UID:e412010f-627f-4878-b1ed-00870579d4f1,ResourceVersion:25321498,Generation:1,CreationTimestamp:2020-02-22 13:04:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] 
[] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-22 13:04:19 +0000 UTC 2020-02-22 13:04:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-22 13:04:27 +0000 UTC 2020-02-22 13:04:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 22 13:04:31.762: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-917,SelfLink:/apis/apps/v1/namespaces/deployment-917/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:a19e12a0-8493-41bd-b4c8-fd0b5931bf05,ResourceVersion:25321486,Generation:1,CreationTimestamp:2020-02-22 13:04:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e412010f-627f-4878-b1ed-00870579d4f1 0xc001f6d647 0xc001f6d648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 22 13:04:31.762: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 22 13:04:31.762: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-917,SelfLink:/apis/apps/v1/namespaces/deployment-917/replicasets/test-rolling-update-controller,UID:429d6cf0-079f-476f-8e8f-0a11b35282e8,ResourceVersion:25321497,Generation:2,CreationTimestamp:2020-02-22 13:04:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment e412010f-627f-4878-b1ed-00870579d4f1 0xc001f6d567 0xc001f6d568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 22 13:04:31.855: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-vrcvb" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-vrcvb,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-917,SelfLink:/api/v1/namespaces/deployment-917/pods/test-rolling-update-deployment-79f6b9d75c-vrcvb,UID:9bb35f0c-d4f9-4664-befd-7cfaf7dc5f9f,ResourceVersion:25321485,Generation:0,CreationTimestamp:2020-02-22 13:04:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c a19e12a0-8493-41bd-b4c8-fd0b5931bf05 0xc001f6df37 0xc001f6df38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-42rpj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-42rpj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-42rpj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f6dfb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f6dfd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:04:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:04:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:04:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:04:19 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-22 13:04:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-22 13:04:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://b55b60c24e80198cae7afe83b92b6873b20039d3c2f87780de9296c8b993e2d1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:04:31.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "deployment-917" for this suite. Feb 22 13:04:37.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:04:38.022: INFO: namespace deployment-917 deletion completed in 6.150082003s • [SLOW TEST:25.540 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:04:38.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 22 13:04:38.103: INFO: Waiting up to 5m0s for pod "pod-a2ecb585-96e6-4d33-9900-362ba4874d8b" in namespace "emptydir-4407" to be "success or failure" Feb 22 13:04:38.191: INFO: Pod "pod-a2ecb585-96e6-4d33-9900-362ba4874d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 87.757925ms Feb 22 13:04:40.204: INFO: Pod "pod-a2ecb585-96e6-4d33-9900-362ba4874d8b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.100842254s Feb 22 13:04:42.221: INFO: Pod "pod-a2ecb585-96e6-4d33-9900-362ba4874d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118229873s Feb 22 13:04:44.375: INFO: Pod "pod-a2ecb585-96e6-4d33-9900-362ba4874d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.272353898s Feb 22 13:04:46.386: INFO: Pod "pod-a2ecb585-96e6-4d33-9900-362ba4874d8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28358895s Feb 22 13:04:48.404: INFO: Pod "pod-a2ecb585-96e6-4d33-9900-362ba4874d8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.300680078s STEP: Saw pod success Feb 22 13:04:48.404: INFO: Pod "pod-a2ecb585-96e6-4d33-9900-362ba4874d8b" satisfied condition "success or failure" Feb 22 13:04:48.409: INFO: Trying to get logs from node iruya-node pod pod-a2ecb585-96e6-4d33-9900-362ba4874d8b container test-container: STEP: delete the pod Feb 22 13:04:48.462: INFO: Waiting for pod pod-a2ecb585-96e6-4d33-9900-362ba4874d8b to disappear Feb 22 13:04:48.467: INFO: Pod pod-a2ecb585-96e6-4d33-9900-362ba4874d8b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:04:48.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4407" for this suite. 
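The EmptyDir test above creates a pod whose volume has `medium: Memory`, which Kubernetes backs with tmpfs, and the test container checks the mount's file mode before exiting. A minimal equivalent manifest, sketched as a plain Python dict (the pod name, image, command, and mount path here are illustrative assumptions; only the `emptyDir` wiring mirrors what the test exercises):

```python
import json

# Hypothetical pod manifest mirroring the "volume on tmpfs" e2e test:
# an emptyDir with medium "Memory" is backed by tmpfs, and the test
# container stats the mount point to report its file mode.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-tmpfs"},  # name is illustrative
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",  # image/command are assumptions
            "command": ["sh", "-c", "stat -c %a /test-volume"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        "volumes": [{
            "name": "test-volume",
            "emptyDir": {"medium": "Memory"},  # "Memory" => tmpfs-backed
        }],
    },
}

manifest = json.dumps(pod, indent=2)
print(manifest)
```

`kubectl apply -f` accepts JSON as well as YAML, so a manifest serialized this way can be submitted directly; the pod's Succeeded phase then maps to the test's "success or failure" condition seen in the log.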
Feb 22 13:04:54.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:04:54.626: INFO: namespace emptydir-4407 deletion completed in 6.153665059s • [SLOW TEST:16.604 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:04:54.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 22 13:04:54.698: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-a,UID:c0acc1a7-9e98-4eea-935f-619aad2fa9d1,ResourceVersion:25321585,Generation:0,CreationTimestamp:2020-02-22 13:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 22 13:04:54.699: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-a,UID:c0acc1a7-9e98-4eea-935f-619aad2fa9d1,ResourceVersion:25321585,Generation:0,CreationTimestamp:2020-02-22 13:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 22 13:05:04.724: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-a,UID:c0acc1a7-9e98-4eea-935f-619aad2fa9d1,ResourceVersion:25321599,Generation:0,CreationTimestamp:2020-02-22 13:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 22 13:05:04.724: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-a,UID:c0acc1a7-9e98-4eea-935f-619aad2fa9d1,ResourceVersion:25321599,Generation:0,CreationTimestamp:2020-02-22 13:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 22 13:05:14.740: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-a,UID:c0acc1a7-9e98-4eea-935f-619aad2fa9d1,ResourceVersion:25321613,Generation:0,CreationTimestamp:2020-02-22 13:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 22 13:05:14.741: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-a,UID:c0acc1a7-9e98-4eea-935f-619aad2fa9d1,ResourceVersion:25321613,Generation:0,CreationTimestamp:2020-02-22 13:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 22 13:05:24.768: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-a,UID:c0acc1a7-9e98-4eea-935f-619aad2fa9d1,ResourceVersion:25321626,Generation:0,CreationTimestamp:2020-02-22 13:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 22 13:05:24.768: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-a,UID:c0acc1a7-9e98-4eea-935f-619aad2fa9d1,ResourceVersion:25321626,Generation:0,CreationTimestamp:2020-02-22 13:04:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 22 13:05:34.794: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-b,UID:f205c268-4cba-466f-b503-b17873c5bbaa,ResourceVersion:25321640,Generation:0,CreationTimestamp:2020-02-22 13:05:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 22 13:05:34.795: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-b,UID:f205c268-4cba-466f-b503-b17873c5bbaa,ResourceVersion:25321640,Generation:0,CreationTimestamp:2020-02-22 13:05:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 22 13:05:44.806: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-b,UID:f205c268-4cba-466f-b503-b17873c5bbaa,ResourceVersion:25321655,Generation:0,CreationTimestamp:2020-02-22 13:05:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 22 13:05:44.807: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3123,SelfLink:/api/v1/namespaces/watch-3123/configmaps/e2e-watch-test-configmap-b,UID:f205c268-4cba-466f-b503-b17873c5bbaa,ResourceVersion:25321655,Generation:0,CreationTimestamp:2020-02-22 13:05:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:05:54.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3123" for this suite. 
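The Watchers test registers three watches keyed on the `watch-this-configmap` label: one matching value A, one matching value B, and one matching A-or-B. That is why every ADDED/MODIFIED/DELETED event above appears twice in the log: both the exact-match watcher and the A-or-B watcher observe it, while the other exact-match watcher stays silent. A toy sketch of that dispatch logic (the watcher names are invented; real watches stream events from the API server rather than being evaluated locally):

```python
# Toy label-selector dispatch: which watchers observe an event for a
# given configmap? Mirrors the A / B / A-or-B watches in the test above.
watchers = {
    "watch-A": {"multiple-watchers-A"},
    "watch-B": {"multiple-watchers-B"},
    "watch-A-or-B": {"multiple-watchers-A", "multiple-watchers-B"},
}

def observers(labels):
    """Return the watchers whose label selector matches the object."""
    value = labels.get("watch-this-configmap")
    return sorted(name for name, accepted in watchers.items()
                  if value in accepted)

# A configmap labelled A is seen by watch-A and watch-A-or-B, never watch-B,
# so each of its events is delivered (and logged) exactly twice.
seen = observers({"watch-this-configmap": "multiple-watchers-A"})
print(seen)  # -> ['watch-A', 'watch-A-or-B']
```

The same dispatch explains the second half of the test: configmap B's events go to `watch-B` and `watch-A-or-B`, again two log lines per event.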
Feb 22 13:06:00.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:06:01.039: INFO: namespace watch-3123 deletion completed in 6.133966541s • [SLOW TEST:66.412 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:06:01.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9555 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 22 13:06:01.228: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 22 13:06:44.203: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9555 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Feb 22 13:06:44.203: INFO: >>> kubeConfig: /root/.kube/config I0222 13:06:44.263632 8 log.go:172] (0xc002c2e000) (0xc00038cd20) Create stream I0222 13:06:44.263733 8 log.go:172] (0xc002c2e000) (0xc00038cd20) Stream added, broadcasting: 1 I0222 13:06:44.287964 8 log.go:172] (0xc002c2e000) Reply frame received for 1 I0222 13:06:44.288038 8 log.go:172] (0xc002c2e000) (0xc0019d8000) Create stream I0222 13:06:44.288051 8 log.go:172] (0xc002c2e000) (0xc0019d8000) Stream added, broadcasting: 3 I0222 13:06:44.290364 8 log.go:172] (0xc002c2e000) Reply frame received for 3 I0222 13:06:44.290448 8 log.go:172] (0xc002c2e000) (0xc001b6c000) Create stream I0222 13:06:44.290458 8 log.go:172] (0xc002c2e000) (0xc001b6c000) Stream added, broadcasting: 5 I0222 13:06:44.292140 8 log.go:172] (0xc002c2e000) Reply frame received for 5 I0222 13:06:44.446959 8 log.go:172] (0xc002c2e000) Data frame received for 3 I0222 13:06:44.447175 8 log.go:172] (0xc0019d8000) (3) Data frame handling I0222 13:06:44.447242 8 log.go:172] (0xc0019d8000) (3) Data frame sent I0222 13:06:44.775530 8 log.go:172] (0xc002c2e000) Data frame received for 1 I0222 13:06:44.776038 8 log.go:172] (0xc002c2e000) (0xc001b6c000) Stream removed, broadcasting: 5 I0222 13:06:44.776290 8 log.go:172] (0xc00038cd20) (1) Data frame handling I0222 13:06:44.776378 8 log.go:172] (0xc00038cd20) (1) Data frame sent I0222 13:06:44.776515 8 log.go:172] (0xc002c2e000) (0xc0019d8000) Stream removed, broadcasting: 3 I0222 13:06:44.776620 8 log.go:172] (0xc002c2e000) (0xc00038cd20) Stream removed, broadcasting: 1 I0222 13:06:44.776718 8 log.go:172] (0xc002c2e000) Go away received I0222 13:06:44.780225 8 log.go:172] (0xc002c2e000) (0xc00038cd20) Stream removed, broadcasting: 1 I0222 13:06:44.780369 8 log.go:172] (0xc002c2e000) (0xc0019d8000) Stream removed, broadcasting: 3 I0222 13:06:44.780400 8 log.go:172] (0xc002c2e000) (0xc001b6c000) Stream removed, broadcasting: 5 Feb 22 13:06:44.780: INFO: 
Found all expected endpoints: [netserver-0] Feb 22 13:06:44.794: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9555 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 22 13:06:44.795: INFO: >>> kubeConfig: /root/.kube/config I0222 13:06:44.903161 8 log.go:172] (0xc000818840) (0xc0019d8320) Create stream I0222 13:06:44.903405 8 log.go:172] (0xc000818840) (0xc0019d8320) Stream added, broadcasting: 1 I0222 13:06:44.928472 8 log.go:172] (0xc000818840) Reply frame received for 1 I0222 13:06:44.928627 8 log.go:172] (0xc000818840) (0xc001baa000) Create stream I0222 13:06:44.928643 8 log.go:172] (0xc000818840) (0xc001baa000) Stream added, broadcasting: 3 I0222 13:06:44.936104 8 log.go:172] (0xc000818840) Reply frame received for 3 I0222 13:06:44.936309 8 log.go:172] (0xc000818840) (0xc001baa0a0) Create stream I0222 13:06:44.936328 8 log.go:172] (0xc000818840) (0xc001baa0a0) Stream added, broadcasting: 5 I0222 13:06:44.946925 8 log.go:172] (0xc000818840) Reply frame received for 5 I0222 13:06:45.147476 8 log.go:172] (0xc000818840) Data frame received for 3 I0222 13:06:45.147520 8 log.go:172] (0xc001baa000) (3) Data frame handling I0222 13:06:45.147541 8 log.go:172] (0xc001baa000) (3) Data frame sent I0222 13:06:45.252610 8 log.go:172] (0xc000818840) (0xc001baa0a0) Stream removed, broadcasting: 5 I0222 13:06:45.252752 8 log.go:172] (0xc000818840) Data frame received for 1 I0222 13:06:45.252790 8 log.go:172] (0xc000818840) (0xc001baa000) Stream removed, broadcasting: 3 I0222 13:06:45.252888 8 log.go:172] (0xc0019d8320) (1) Data frame handling I0222 13:06:45.252938 8 log.go:172] (0xc0019d8320) (1) Data frame sent I0222 13:06:45.252954 8 log.go:172] (0xc000818840) (0xc0019d8320) Stream removed, broadcasting: 1 I0222 13:06:45.252974 8 log.go:172] (0xc000818840) Go away received 
I0222 13:06:45.254054 8 log.go:172] (0xc000818840) (0xc0019d8320) Stream removed, broadcasting: 1 I0222 13:06:45.254144 8 log.go:172] (0xc000818840) (0xc001baa000) Stream removed, broadcasting: 3 I0222 13:06:45.254149 8 log.go:172] (0xc000818840) (0xc001baa0a0) Stream removed, broadcasting: 5 Feb 22 13:06:45.254: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:06:45.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9555" for this suite. Feb 22 13:07:09.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:07:09.404: INFO: namespace pod-network-test-9555 deletion completed in 24.142037209s • [SLOW TEST:68.364 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:07:09.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 22 13:07:09.620: INFO: Waiting up to 5m0s for pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43" in namespace "downward-api-4242" to be "success or failure" Feb 22 13:07:09.769: INFO: Pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43": Phase="Pending", Reason="", readiness=false. Elapsed: 148.558026ms Feb 22 13:07:11.779: INFO: Pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158471201s Feb 22 13:07:13.802: INFO: Pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181986408s Feb 22 13:07:15.811: INFO: Pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190404926s Feb 22 13:07:17.822: INFO: Pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43": Phase="Pending", Reason="", readiness=false. Elapsed: 8.201396704s Feb 22 13:07:21.877: INFO: Pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43": Phase="Pending", Reason="", readiness=false. Elapsed: 12.257069893s Feb 22 13:07:23.894: INFO: Pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43": Phase="Pending", Reason="", readiness=false. Elapsed: 14.274264136s Feb 22 13:07:25.903: INFO: Pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.283309976s STEP: Saw pod success Feb 22 13:07:25.904: INFO: Pod "downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43" satisfied condition "success or failure" Feb 22 13:07:25.909: INFO: Trying to get logs from node iruya-node pod downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43 container dapi-container: STEP: delete the pod Feb 22 13:07:26.399: INFO: Waiting for pod downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43 to disappear Feb 22 13:07:26.410: INFO: Pod downward-api-c5847406-3f7d-4b09-9b6a-0cb9fde26c43 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:07:26.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4242" for this suite. Feb 22 13:07:32.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:07:32.556: INFO: namespace downward-api-4242 deletion completed in 6.139161725s • [SLOW TEST:23.151 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:07:32.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 22 13:07:41.207: INFO: Waiting up to 5m0s for pod "client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484" in namespace "pods-7423" to be "success or failure" Feb 22 13:07:41.220: INFO: Pod "client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484": Phase="Pending", Reason="", readiness=false. Elapsed: 11.945109ms Feb 22 13:07:43.230: INFO: Pod "client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022078451s Feb 22 13:07:45.241: INFO: Pod "client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032645743s Feb 22 13:07:47.340: INFO: Pod "client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132341977s Feb 22 13:07:50.465: INFO: Pod "client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484": Phase="Pending", Reason="", readiness=false. Elapsed: 9.256784892s Feb 22 13:07:52.478: INFO: Pod "client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.269590046s STEP: Saw pod success Feb 22 13:07:52.478: INFO: Pod "client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484" satisfied condition "success or failure" Feb 22 13:07:52.487: INFO: Trying to get logs from node iruya-node pod client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484 container env3cont: STEP: delete the pod Feb 22 13:07:52.662: INFO: Waiting for pod client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484 to disappear Feb 22 13:07:52.742: INFO: Pod client-envvars-c89c5105-857a-42ca-8570-75b3c76f3484 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:07:52.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7423" for this suite. Feb 22 13:08:34.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:08:34.919: INFO: namespace pods-7423 deletion completed in 42.170843233s • [SLOW TEST:62.362 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:08:34.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-3e2af427-719a-42e1-b7ae-615c30c3bf83 STEP: Creating a pod to test consume configMaps Feb 22 13:08:35.072: INFO: Waiting up to 5m0s for pod "pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6" in namespace "configmap-1011" to be "success or failure" Feb 22 13:08:35.079: INFO: Pod "pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.615842ms Feb 22 13:08:37.091: INFO: Pod "pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018903074s Feb 22 13:08:39.128: INFO: Pod "pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055922864s Feb 22 13:08:41.170: INFO: Pod "pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097895157s Feb 22 13:08:43.186: INFO: Pod "pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113919055s Feb 22 13:08:45.244: INFO: Pod "pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.171931019s STEP: Saw pod success Feb 22 13:08:45.244: INFO: Pod "pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6" satisfied condition "success or failure" Feb 22 13:08:45.250: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6 container configmap-volume-test: STEP: delete the pod Feb 22 13:08:45.326: INFO: Waiting for pod pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6 to disappear Feb 22 13:08:45.340: INFO: Pod pod-configmaps-a9c9f123-8a5b-4133-9442-bd866071aac6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:08:45.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1011" for this suite. Feb 22 13:08:51.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:08:51.559: INFO: namespace configmap-1011 deletion completed in 6.175308823s • [SLOW TEST:16.639 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:08:51.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set 
[Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 22 13:08:51.771: INFO: Number of nodes with available pods: 0 Feb 22 13:08:51.771: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:08:52.816: INFO: Number of nodes with available pods: 0 Feb 22 13:08:52.816: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:08:53.797: INFO: Number of nodes with available pods: 0 Feb 22 13:08:53.797: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:08:54.839: INFO: Number of nodes with available pods: 0 Feb 22 13:08:54.840: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:08:55.815: INFO: Number of nodes with available pods: 0 Feb 22 13:08:55.815: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:08:56.833: INFO: Number of nodes with available pods: 0 Feb 22 13:08:56.833: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:08:59.881: INFO: Number of nodes with available pods: 0 Feb 22 13:08:59.881: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:09:00.797: INFO: Number of nodes with available pods: 0 Feb 22 13:09:00.797: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:09:01.799: INFO: Number of nodes with available pods: 1 Feb 22 13:09:01.799: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:09:02.788: INFO: Number of nodes with available pods: 1 Feb 22 13:09:02.788: INFO: Node iruya-node is running more than one daemon pod Feb 22 13:09:03.795: INFO: Number of nodes with available pods: 2 Feb 22 13:09:03.795: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's 
phase to 'Failed', check that the daemon pod is revived. Feb 22 13:09:03.902: INFO: Number of nodes with available pods: 2 Feb 22 13:09:03.902: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4797, will wait for the garbage collector to delete the pods Feb 22 13:09:05.199: INFO: Deleting DaemonSet.extensions daemon-set took: 10.037329ms Feb 22 13:09:05.600: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.730313ms Feb 22 13:09:16.640: INFO: Number of nodes with available pods: 0 Feb 22 13:09:16.640: INFO: Number of running nodes: 0, number of available pods: 0 Feb 22 13:09:16.649: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4797/daemonsets","resourceVersion":"25322143"},"items":null} Feb 22 13:09:16.653: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4797/pods","resourceVersion":"25322143"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:09:16.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4797" for this suite. 
Feb 22 13:09:22.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:09:22.816: INFO: namespace daemonsets-4797 deletion completed in 6.1379266s • [SLOW TEST:31.257 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:09:22.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 22 13:09:22.980: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 22 13:09:27.990: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 22 13:09:32.015: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 22 13:09:34.023: INFO: Creating deployment "test-rollover-deployment" Feb 22 13:09:34.047: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 22 13:09:36.059: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 22 13:09:36.070: INFO: Ensure that both 
replica sets have 1 created replica Feb 22 13:09:36.075: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 22 13:09:36.081: INFO: Updating deployment test-rollover-deployment Feb 22 13:09:36.081: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 22 13:09:38.093: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 22 13:09:38.102: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 22 13:09:38.110: INFO: all replica sets need to contain the pod-template-hash label Feb 22 13:09:38.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973776, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:40.134: INFO: all replica sets need to contain the pod-template-hash label Feb 22 13:09:40.135: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, 
loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973776, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:42.128: INFO: all replica sets need to contain the pod-template-hash label Feb 22 13:09:42.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973776, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:44.156: INFO: all replica sets need to contain the pod-template-hash label Feb 22 13:09:44.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973776, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:46.125: INFO: all replica sets need to contain the pod-template-hash label Feb 22 13:09:46.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973776, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:48.258: INFO: all replica sets need to contain the pod-template-hash label Feb 22 13:09:48.258: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973776, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:50.127: INFO: all replica sets need to contain the pod-template-hash label Feb 22 13:09:50.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973788, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:52.124: INFO: all 
replica sets need to contain the pod-template-hash label Feb 22 13:09:52.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973788, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:54.155: INFO: all replica sets need to contain the pod-template-hash label Feb 22 13:09:54.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973788, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:56.132: INFO: all replica sets need to contain the pod-template-hash label Feb 22 13:09:56.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973788, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:09:58.738: INFO: Feb 22 13:09:58.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973788, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973774, loc:(*time.Location)(0x7ea48a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 22 13:10:00.125: INFO: Feb 22 13:10:00.125: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 22 13:10:00.138: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-1983,SelfLink:/apis/apps/v1/namespaces/deployment-1983/deployments/test-rollover-deployment,UID:261b0d32-d9f0-4f59-8c10-6bba306d2f43,ResourceVersion:25322290,Generation:2,CreationTimestamp:2020-02-22 13:09:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-22 13:09:34 +0000 UTC 2020-02-22 13:09:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-22 13:09:58 +0000 UTC 2020-02-22 13:09:34 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 22 13:10:00.142: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-1983,SelfLink:/apis/apps/v1/namespaces/deployment-1983/replicasets/test-rollover-deployment-854595fc44,UID:8c618d32-b934-43d1-b9ac-4738a77a29f1,ResourceVersion:25322280,Generation:2,CreationTimestamp:2020-02-22 13:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 261b0d32-d9f0-4f59-8c10-6bba306d2f43 0xc001fa41c7 0xc001fa41c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 22 13:10:00.142: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 22 13:10:00.142: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-1983,SelfLink:/apis/apps/v1/namespaces/deployment-1983/replicasets/test-rollover-controller,UID:4a80cbc9-48ec-4fac-a893-896e94856fdf,ResourceVersion:25322289,Generation:2,CreationTimestamp:2020-02-22 13:09:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 261b0d32-d9f0-4f59-8c10-6bba306d2f43 0xc001fa40f7 0xc001fa40f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 22 13:10:00.142: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-1983,SelfLink:/apis/apps/v1/namespaces/deployment-1983/replicasets/test-rollover-deployment-9b8b997cf,UID:8fe91957-1cd8-48de-b91f-e64c19543008,ResourceVersion:25322240,Generation:2,CreationTimestamp:2020-02-22 13:09:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 261b0d32-d9f0-4f59-8c10-6bba306d2f43 0xc001fa4290 0xc001fa4291}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 22 13:10:00.146: INFO: Pod "test-rollover-deployment-854595fc44-jgm4c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-jgm4c,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-1983,SelfLink:/api/v1/namespaces/deployment-1983/pods/test-rollover-deployment-854595fc44-jgm4c,UID:e2dcba22-dd53-4b13-a05e-3476ea8b0711,ResourceVersion:25322263,Generation:0,CreationTimestamp:2020-02-22 13:09:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 8c618d32-b934-43d1-b9ac-4738a77a29f1 0xc001fa4e87 0xc001fa4e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qwlrz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qwlrz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-qwlrz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001fa4f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001fa4f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:09:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:09:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:09:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:09:36 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-22 13:09:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-22 13:09:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://55c734e010db2c70e575e2f6112d74bd53096f73181b1c863559ff1cd174dcfc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:10:00.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1983" for this suite.
Feb 22 13:10:08.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:10:08.338: INFO: namespace deployment-1983 deletion completed in 8.185665409s
• [SLOW TEST:45.521 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:10:08.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 13:10:08.500: INFO: Creating
deployment "nginx-deployment"
Feb 22 13:10:08.507: INFO: Waiting for observed generation 1
Feb 22 13:10:10.766: INFO: Waiting for all required pods to come up
Feb 22 13:10:12.834: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 22 13:10:43.709: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 22 13:10:43.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:9, AvailableReplicas:9, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973843, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973843, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973843, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717973808, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"nginx-deployment-7b8c6f4498\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 22 13:10:45.741: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 22 13:10:45.756: INFO: Updating deployment nginx-deployment
Feb 22 13:10:45.756: INFO: Waiting for observed generation 2
Feb 22 13:10:49.263: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 22 13:10:49.701: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 22 13:10:49.745: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 22 13:10:50.514: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 22 13:10:50.514: INFO: Waiting
for the second rollout's replicaset to have .spec.replicas = 5
Feb 22 13:10:50.518: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 22 13:10:50.528: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 22 13:10:50.528: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 22 13:10:50.544: INFO: Updating deployment nginx-deployment
Feb 22 13:10:50.544: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 22 13:10:52.921: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 22 13:10:55.410: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 22 13:10:57.047: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8812,SelfLink:/apis/apps/v1/namespaces/deployment-8812/deployments/nginx-deployment,UID:95027767-c2c7-489e-bbd6-f351baaf92b0,ResourceVersion:25322620,Generation:3,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-22 13:10:52 +0000 UTC 2020-02-22 13:10:52 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-22 13:10:53 +0000 UTC 2020-02-22 13:10:08 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 22 
13:10:57.331: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8812,SelfLink:/apis/apps/v1/namespaces/deployment-8812/replicasets/nginx-deployment-55fb7cb77f,UID:9b466c79-318b-40ea-904f-e1c31452d8be,ResourceVersion:25322616,Generation:3,CreationTimestamp:2020-02-22 13:10:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 95027767-c2c7-489e-bbd6-f351baaf92b0 0xc000615697 0xc000615698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 22 13:10:57.331: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 22 13:10:57.332: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8812,SelfLink:/apis/apps/v1/namespaces/deployment-8812/replicasets/nginx-deployment-7b8c6f4498,UID:8315bce5-b1ea-4d9b-9f5b-75c216aad72e,ResourceVersion:25322618,Generation:3,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 95027767-c2c7-489e-bbd6-f351baaf92b0 0xc000615767 0xc000615768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 22 13:10:58.304: INFO: Pod "nginx-deployment-55fb7cb77f-2r2tf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2r2tf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-2r2tf,UID:e671cd00-de16-4dff-88e6-a58227fc3feb,ResourceVersion:25322596,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031be147 0xc0031be148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0031be1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031be250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.304: INFO: Pod "nginx-deployment-55fb7cb77f-6xbmf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6xbmf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-6xbmf,UID:014ac3d9-7f54-4d8c-9034-f6b13ce5d096,ResourceVersion:25322615,Generation:0,CreationTimestamp:2020-02-22 13:10:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031be2d7 0xc0031be2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031be340} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031be360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:52 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-22 13:10:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.305: INFO: Pod "nginx-deployment-55fb7cb77f-9m466" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9m466,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-9m466,UID:2b66b012-2502-48e4-9de9-e86ab1c32c44,ResourceVersion:25322543,Generation:0,CreationTimestamp:2020-02-22 13:10:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031be507 0xc0031be508}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031be5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031be5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-22 13:10:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.305: INFO: Pod "nginx-deployment-55fb7cb77f-b26q4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b26q4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-b26q4,UID:d70fc01c-d0a7-434d-bbc0-c77798383a46,ResourceVersion:25322568,Generation:0,CreationTimestamp:2020-02-22 13:10:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031be787 0xc0031be788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0031be850} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031be870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-22 13:10:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.305: INFO: Pod "nginx-deployment-55fb7cb77f-fmwct" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fmwct,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-fmwct,UID:7cac1031-3e4b-4fc3-9c90-19a8c7132e43,ResourceVersion:25322599,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031bea07 0xc0031bea08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031beb50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031beb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.305: INFO: Pod "nginx-deployment-55fb7cb77f-nb6w2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nb6w2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-nb6w2,UID:10f8ef78-0a27-402a-9eb6-6850e5ead272,ResourceVersion:25322594,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031bec47 0xc0031bec48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031becf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bed10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.305: INFO: Pod "nginx-deployment-55fb7cb77f-nl5xx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nl5xx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-nl5xx,UID:53197828-c49c-4fb3-90cf-6fb8fea4119a,ResourceVersion:25322607,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031bee17 
0xc0031bee18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031beef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bef10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.306: INFO: Pod "nginx-deployment-55fb7cb77f-nr2pf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nr2pf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-nr2pf,UID:1f9d2316-be1e-4e0c-9a42-9c8f83130332,ResourceVersion:25322625,Generation:0,CreationTimestamp:2020-02-22 13:10:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031bf007 0xc0031bf008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031bf0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bf0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-22 13:10:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.306: INFO: Pod "nginx-deployment-55fb7cb77f-p5w7f" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p5w7f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-p5w7f,UID:054e4f11-6a54-4db5-80e9-32b5a3e3d6cc,ResourceVersion:25322548,Generation:0,CreationTimestamp:2020-02-22 13:10:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031bf227 0xc0031bf228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0031bf360} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bf380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:45 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-22 13:10:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.306: INFO: Pod "nginx-deployment-55fb7cb77f-q8h6r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q8h6r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-q8h6r,UID:7b2d751e-ae86-4559-a464-2df357b7980e,ResourceVersion:25322583,Generation:0,CreationTimestamp:2020-02-22 13:10:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031bf4c7 0xc0031bf4c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031bf590} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bf5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.306: INFO: Pod "nginx-deployment-55fb7cb77f-qpxnf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qpxnf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-qpxnf,UID:b8c26194-b842-4661-b442-a0154e267ef6,ResourceVersion:25322533,Generation:0,CreationTimestamp:2020-02-22 13:10:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031bf667 0xc0031bf668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031bf710} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bf7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:45 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-22 13:10:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.306: INFO: Pod "nginx-deployment-55fb7cb77f-tsn6q" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tsn6q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-tsn6q,UID:45cd03d3-36d0-4404-bbac-55516b65f86b,ResourceVersion:25322532,Generation:0,CreationTimestamp:2020-02-22 13:10:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031bf8f7 0xc0031bf8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0031bf9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bf9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:45 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-22 13:10:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.307: INFO: Pod "nginx-deployment-55fb7cb77f-wcntd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wcntd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-55fb7cb77f-wcntd,UID:d34e4b3f-395d-4df8-8154-a31dcd131dc5,ResourceVersion:25322586,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 9b466c79-318b-40ea-904f-e1c31452d8be 0xc0031bfab7 0xc0031bfab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031bfb30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bfb50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.307: INFO: Pod "nginx-deployment-7b8c6f4498-74wrn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-74wrn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-74wrn,UID:ace573ad-aa1e-425a-ac8d-858b66f5bcd7,ResourceVersion:25322600,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc0031bfbe7 0xc0031bfbe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031bfc60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bfc80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.307: INFO: Pod "nginx-deployment-7b8c6f4498-7xmc7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7xmc7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-7xmc7,UID:dc8db882-feda-465d-b574-74dd448309f1,ResourceVersion:25322595,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc0031bfd07 0xc0031bfd08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031bfd80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bfda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.307: INFO: Pod "nginx-deployment-7b8c6f4498-8l8jk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8l8jk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-8l8jk,UID:4b052798-e56d-48f0-9d83-46c11f0e5afc,ResourceVersion:25322630,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc0031bfe27 0xc0031bfe28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031bfe90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031bfeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-22 13:10:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.307: INFO: Pod "nginx-deployment-7b8c6f4498-9ddzq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9ddzq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-9ddzq,UID:054f61fe-8ee0-45ec-bf4a-1c3815988de8,ResourceVersion:25322468,Generation:0,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc0031bff77 0xc0031bff78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031bffe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157c000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-02-22 13:10:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-22 13:10:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://06f5f76a3e3b8700a4bec80e0e1791f35f7689a144128be9fa627de34a7f0f1d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.308: INFO: Pod "nginx-deployment-7b8c6f4498-m9jtg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m9jtg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-m9jtg,UID:3b50e3b1-1866-422f-981e-d3f9c1f01f60,ResourceVersion:25322473,Generation:0,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157c0d7 0xc00157c0d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157c140} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157c160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-22 13:10:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-22 13:10:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2819ec6eddc055d2a2549b8ed2974c72a4659da81125616991e44416ec3829b1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.308: INFO: Pod "nginx-deployment-7b8c6f4498-md8nc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-md8nc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-md8nc,UID:141a2623-a615-4d4a-adee-0ab7ca36f86e,ResourceVersion:25322610,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157c237 0xc00157c238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157c2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157c2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.308: INFO: Pod "nginx-deployment-7b8c6f4498-njl4m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-njl4m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-njl4m,UID:d9583423-a3ee-495d-8240-2b5de55d1945,ResourceVersion:25322452,Generation:0,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157c357 0xc00157c358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157c3c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157c3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-22 13:10:09 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-22 13:10:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://82a937c38be6ba2d73994f6548023027fd287b46bfc601ec8c62c59e77c5382f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.308: INFO: Pod "nginx-deployment-7b8c6f4498-ptj4n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ptj4n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-ptj4n,UID:ccb44a5e-3fa4-457d-a0fb-dc97cf3dd18b,ResourceVersion:25322575,Generation:0,CreationTimestamp:2020-02-22 13:10:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157c4b7 0xc00157c4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157c530} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157c550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.309: INFO: Pod "nginx-deployment-7b8c6f4498-q8k4v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q8k4v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-q8k4v,UID:a67b8fc0-8169-44b1-80e2-c3958fb6dce2,ResourceVersion:25322479,Generation:0,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157c5d7 0xc00157c5d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157c650} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157c670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-22 13:10:11 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-02-22 13:10:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e703cd88f8f8cbe9e12ce8ab49fe9afca0c956b89ad47174252216aee0515f6b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.309: INFO: Pod "nginx-deployment-7b8c6f4498-tbtl4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tbtl4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-tbtl4,UID:677e8be5-42b7-4814-88aa-b9ec17c99d0d,ResourceVersion:25322449,Generation:0,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157c747 0xc00157c748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157c7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157c7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-22 13:10:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-22 13:10:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1fe23d72868f73621cd3b377c7741f24cd0b282803163ed515a2a1d5dee08758}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.310: INFO: Pod "nginx-deployment-7b8c6f4498-tcpdt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tcpdt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-tcpdt,UID:ec675d51-1483-4157-908e-455c7a0b790a,ResourceVersion:25322573,Generation:0,CreationTimestamp:2020-02-22 13:10:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157c8a7 0xc00157c8a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157c920} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157c940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.310: INFO: Pod "nginx-deployment-7b8c6f4498-vvcth" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vvcth,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-vvcth,UID:abf29836-9d6b-4d3c-95d7-9271aa43f61d,ResourceVersion:25322601,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157c9c7 0xc00157c9c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157ca40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157ca60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.310: INFO: Pod "nginx-deployment-7b8c6f4498-wk9qc" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wk9qc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-wk9qc,UID:355828c2-073b-4671-ba14-32fc0ef426d4,ResourceVersion:25322484,Generation:0,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157cae7 0xc00157cae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157cb60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157cb80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-22 13:10:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-22 13:10:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://212f61cc3d1af8f869b28c7d1547dc0e7dbdcfb254cf04be8387bdcd8106fc3f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.311: INFO: Pod "nginx-deployment-7b8c6f4498-wms6c" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wms6c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-wms6c,UID:64cadbc2-d15a-450f-a380-83c4cc48f6f9,ResourceVersion:25322609,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157cc67 0xc00157cc68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157ccd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157ccf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.311: INFO: Pod "nginx-deployment-7b8c6f4498-wzqcg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wzqcg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-wzqcg,UID:d03ab715-b5f2-4647-9175-ba7a767fd818,ResourceVersion:25322464,Generation:0,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157cd77 
0xc00157cd78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157cdf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157ce10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 
13:10:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-22 13:10:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-22 13:10:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1dd581ee269af48a9f5b27c5f2c5f2e0507927c12e51e82fb2da6c7d13c02672}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.311: INFO: Pod "nginx-deployment-7b8c6f4498-xxf2r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xxf2r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-xxf2r,UID:518c0389-ee51-4ab9-9d46-0669c3fd7b6e,ResourceVersion:25322608,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157cee7 0xc00157cee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157cf60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157cf80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.312: INFO: Pod "nginx-deployment-7b8c6f4498-z9qdp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z9qdp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-z9qdp,UID:88ea678c-9af5-477b-90d8-3d8c81fb39b6,ResourceVersion:25322474,Generation:0,CreationTimestamp:2020-02-22 13:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157d007 
0xc00157d008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157d090} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157d0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:08 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-22 13:10:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-22 13:10:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b276fb4136a05308a369009df844c400652b52066e4d054d2d464fd91bc4d4b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.312: INFO: Pod "nginx-deployment-7b8c6f4498-zbhp2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zbhp2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-zbhp2,UID:3433b6ac-88ec-4972-aa35-4b6115160c64,ResourceVersion:25322606,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157d187 0xc00157d188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157d200} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157d220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.312: INFO: Pod "nginx-deployment-7b8c6f4498-zpp94" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zpp94,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-zpp94,UID:b249a874-caf6-4d82-b6db-f0655eb1fb01,ResourceVersion:25322631,Generation:0,CreationTimestamp:2020-02-22 13:10:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157d2a7 0xc00157d2a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157d320} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157d340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:52 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-22 13:10:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 22 13:10:58.312: INFO: Pod "nginx-deployment-7b8c6f4498-zsnst" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zsnst,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8812,SelfLink:/api/v1/namespaces/deployment-8812/pods/nginx-deployment-7b8c6f4498-zsnst,UID:b40370df-dd1b-42da-afa2-6276b0e7179d,ResourceVersion:25322605,Generation:0,CreationTimestamp:2020-02-22 13:10:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8315bce5-b1ea-4d9b-9f5b-75c216aad72e 0xc00157d417 0xc00157d418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7wcjt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7wcjt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7wcjt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00157d480} {node.kubernetes.io/unreachable Exists NoExecute 0xc00157d4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:10:53 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:10:58.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8812" for this suite. 
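The dozens of Pod objects dumped above all descend from one Deployment template. As a rough reconstruction for orientation only (the `nginx-deployment` name, the `name: nginx` label, the namespace, and the `nginx:1.14-alpine` image are taken from the log; the replica count and everything else are assumptions, since the spec itself is never printed):

```yaml
# Hypothetical reconstruction of the Deployment under test.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: deployment-8812
spec:
  replicas: 3            # assumed; the proportional-scaling test varies this
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

The `pod-template-hash: 7b8c6f4498` label seen on every Pod above is added by the Deployment controller to tie pods to their ReplicaSet; it is not part of the template.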
Feb 22 13:12:35.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:12:35.608: INFO: namespace deployment-8812 deletion completed in 1m34.615838922s
• [SLOW TEST:147.269 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:12:35.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 13:12:35.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29" in namespace "downward-api-9669" to be "success or failure"
Feb 22 13:12:36.014: INFO: Pod "downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29": Phase="Pending", Reason="", readiness=false. Elapsed: 137.69237ms
Feb 22 13:12:38.023: INFO: Pod "downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146270496s
Feb 22 13:12:40.034: INFO: Pod "downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1569996s
Feb 22 13:12:42.060: INFO: Pod "downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182768867s
Feb 22 13:12:44.079: INFO: Pod "downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.201940192s
Feb 22 13:12:46.106: INFO: Pod "downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29": Phase="Pending", Reason="", readiness=false. Elapsed: 10.229505199s
Feb 22 13:12:48.114: INFO: Pod "downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.237301828s
STEP: Saw pod success
Feb 22 13:12:48.114: INFO: Pod "downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29" satisfied condition "success or failure"
Feb 22 13:12:48.118: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29 container client-container:
STEP: delete the pod
Feb 22 13:12:48.387: INFO: Waiting for pod downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29 to disappear
Feb 22 13:12:48.446: INFO: Pod downwardapi-volume-b9e39c5f-aedd-4cee-98eb-ed9c844a3c29 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:12:48.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9669" for this suite.
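The pod in this test exposes its own name to the container through a downward API volume. A minimal sketch of that pattern (the pod name, image, command, and mount path here are illustrative assumptions; only the `client-container` name and the podname-only behavior come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The container prints its own pod name and exits, which is why the framework above polls the pod until it reaches the "success or failure" condition (phase Succeeded) and then reads the container logs to verify the output.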
Feb 22 13:12:54.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:12:54.585: INFO: namespace downward-api-9669 deletion completed in 6.129242864s
• [SLOW TEST:18.976 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:12:54.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 22 13:12:54.736: INFO: namespace kubectl-6439
Feb 22 13:12:54.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6439'
Feb 22 13:12:57.525: INFO: stderr: ""
Feb 22 13:12:57.525: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 22 13:12:58.546: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:12:58.546: INFO: Found 0 / 1
Feb 22 13:12:59.536: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:12:59.536: INFO: Found 0 / 1
Feb 22 13:13:00.537: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:00.537: INFO: Found 0 / 1
Feb 22 13:13:01.549: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:01.549: INFO: Found 0 / 1
Feb 22 13:13:02.538: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:02.539: INFO: Found 0 / 1
Feb 22 13:13:03.532: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:03.532: INFO: Found 0 / 1
Feb 22 13:13:04.539: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:04.539: INFO: Found 0 / 1
Feb 22 13:13:05.541: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:05.541: INFO: Found 0 / 1
Feb 22 13:13:06.547: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:06.548: INFO: Found 0 / 1
Feb 22 13:13:07.599: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:07.599: INFO: Found 0 / 1
Feb 22 13:13:08.535: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:08.535: INFO: Found 1 / 1
Feb 22 13:13:08.535: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 22 13:13:08.539: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 13:13:08.539: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 22 13:13:08.539: INFO: wait on redis-master startup in kubectl-6439
Feb 22 13:13:08.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wblnx redis-master --namespace=kubectl-6439'
Feb 22 13:13:08.734: INFO: stderr: ""
Feb 22 13:13:08.734: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Feb 13:13:07.224 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Feb 13:13:07.225 # Server started, Redis version 3.2.12\n1:M 22 Feb 13:13:07.225 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Feb 13:13:07.225 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 22 13:13:08.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6439'
Feb 22 13:13:08.923: INFO: stderr: ""
Feb 22 13:13:08.924: INFO: stdout: "service/rm2 exposed\n"
Feb 22 13:13:09.043: INFO: Service rm2 in namespace kubectl-6439 found.
STEP: exposing service
Feb 22 13:13:11.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6439'
Feb 22 13:13:11.411: INFO: stderr: ""
Feb 22 13:13:11.412: INFO: stdout: "service/rm3 exposed\n"
Feb 22 13:13:11.474: INFO: Service rm3 in namespace kubectl-6439 found.
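The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` command logged here is roughly equivalent to applying a Service manifest like the following (the `app: redis` selector is inferred from the `map[app:redis]` pod matching earlier in this test; `kubectl expose` copies it from the RC's pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-6439
spec:
  selector:
    app: redis          # copied from the RC's pod template labels
  ports:
  - port: 1234          # port the Service listens on
    targetPort: 6379    # redis port inside the pod
```

The second command, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, creates an analogous `rm3` Service reusing `rm2`'s selector, so both Services route to the same redis-master pod on different service ports.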
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:13:13.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6439" for this suite.
Feb 22 13:13:37.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:13:37.680: INFO: namespace kubectl-6439 deletion completed in 24.188456471s
• [SLOW TEST:43.094 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:13:37.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 22 13:13:37.801: INFO: Waiting up to 5m0s for pod "client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73" in namespace "containers-8040" to be "success or failure"
Feb 22 13:13:37.828: INFO: Pod "client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 27.717678ms
Feb 22 13:13:39.855: INFO: Pod "client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054034809s
Feb 22 13:13:41.880: INFO: Pod "client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07950152s
Feb 22 13:13:43.954: INFO: Pod "client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153045011s
Feb 22 13:13:45.990: INFO: Pod "client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189137344s
Feb 22 13:13:48.000: INFO: Pod "client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.199680776s
STEP: Saw pod success
Feb 22 13:13:48.001: INFO: Pod "client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73" satisfied condition "success or failure"
Feb 22 13:13:48.006: INFO: Trying to get logs from node iruya-node pod client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73 container test-container:
STEP: delete the pod
Feb 22 13:13:48.124: INFO: Waiting for pod client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73 to disappear
Feb 22 13:13:48.133: INFO: Pod client-containers-62e7f432-2ac8-46a5-a055-51028e1e5f73 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:13:48.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8040" for this suite.
Feb 22 13:13:54.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:13:54.395: INFO: namespace containers-8040 deletion completed in 6.199192335s
• [SLOW TEST:16.715 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:13:54.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-d8502bbf-f1db-43c4-920a-1ec94f36597d
STEP: Creating a pod to test consume secrets
Feb 22 13:13:54.571: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce" in namespace "projected-1619" to be "success or failure"
Feb 22 13:13:54.696: INFO: Pod "pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce": Phase="Pending", Reason="", readiness=false. Elapsed: 124.926825ms
Feb 22 13:13:56.788: INFO: Pod "pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216665037s
Feb 22 13:13:58.808: INFO: Pod "pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236504433s
Feb 22 13:14:00.823: INFO: Pod "pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251622374s
Feb 22 13:14:02.836: INFO: Pod "pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.264793175s
Feb 22 13:14:04.846: INFO: Pod "pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce": Phase="Pending", Reason="", readiness=false. Elapsed: 10.274711469s
Feb 22 13:14:06.867: INFO: Pod "pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.295707096s
STEP: Saw pod success
Feb 22 13:14:06.868: INFO: Pod "pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce" satisfied condition "success or failure"
Feb 22 13:14:06.876: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce container projected-secret-volume-test:
STEP: delete the pod
Feb 22 13:14:07.076: INFO: Waiting for pod pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce to disappear
Feb 22 13:14:07.081: INFO: Pod pod-projected-secrets-54eaf26c-9884-4a02-a3f8-bac2a68c36ce no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:14:07.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1619" for this suite.
Feb 22 13:14:13.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:14:13.232: INFO: namespace projected-1619 deletion completed in 6.143554264s
• [SLOW TEST:18.835 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:14:13.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2795/configmap-test-c4da888a-0d66-4fc9-a734-d10d41f99303
STEP: Creating a pod to test consume configMaps
Feb 22 13:14:13.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b" in namespace "configmap-2795" to be "success or failure"
Feb 22 13:14:13.525: INFO: Pod "pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 105.84148ms
Feb 22 13:14:16.076: INFO: Pod "pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.656679755s
Feb 22 13:14:18.086: INFO: Pod "pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.666505336s
Feb 22 13:14:20.092: INFO: Pod "pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673144846s
Feb 22 13:14:22.097: INFO: Pod "pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.677802167s
Feb 22 13:14:24.102: INFO: Pod "pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.682818238s
Feb 22 13:14:26.111: INFO: Pod "pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.691727878s
STEP: Saw pod success
Feb 22 13:14:26.111: INFO: Pod "pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b" satisfied condition "success or failure"
Feb 22 13:14:26.114: INFO: Trying to get logs from node iruya-node pod pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b container env-test:
STEP: delete the pod
Feb 22 13:14:26.197: INFO: Waiting for pod pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b to disappear
Feb 22 13:14:26.245: INFO: Pod pod-configmaps-190a0221-55af-480b-b30a-0ea9dc05de2b no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:14:26.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2795" for this suite.
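This test wires a ConfigMap key into a container environment variable (the `env-test` container name comes from the log). A minimal sketch of the pattern; the ConfigMap name, key, and variable name here are hypothetical, since the test generates unique names like `configmap-test-c4da888a-…`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test         # hypothetical; the test uses a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox             # assumed image
    command: ["sh", "-c", "echo $DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The framework then asserts "success or failure" on the pod and checks the container log for the expected value, the same poll-then-read-logs loop visible above.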
Feb 22 13:14:32.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:14:32.528: INFO: namespace configmap-2795 deletion completed in 6.277124308s

• [SLOW TEST:19.296 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:14:32.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 22 13:14:32.671: INFO: Waiting up to 5m0s for pod "pod-32d9ee59-5a50-4216-8478-68d67719c71f" in namespace "emptydir-5713" to be "success or failure"
Feb 22 13:14:32.681: INFO: Pod "pod-32d9ee59-5a50-4216-8478-68d67719c71f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.885152ms
Feb 22 13:14:34.695: INFO: Pod "pod-32d9ee59-5a50-4216-8478-68d67719c71f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023297172s
Feb 22 13:14:36.886: INFO: Pod "pod-32d9ee59-5a50-4216-8478-68d67719c71f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214122262s
Feb 22 13:14:38.894: INFO: Pod "pod-32d9ee59-5a50-4216-8478-68d67719c71f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22266828s
Feb 22 13:14:40.903: INFO: Pod "pod-32d9ee59-5a50-4216-8478-68d67719c71f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.231482925s
Feb 22 13:14:42.923: INFO: Pod "pod-32d9ee59-5a50-4216-8478-68d67719c71f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.251320529s
STEP: Saw pod success
Feb 22 13:14:42.924: INFO: Pod "pod-32d9ee59-5a50-4216-8478-68d67719c71f" satisfied condition "success or failure"
Feb 22 13:14:42.947: INFO: Trying to get logs from node iruya-node pod pod-32d9ee59-5a50-4216-8478-68d67719c71f container test-container:
STEP: delete the pod
Feb 22 13:14:43.209: INFO: Waiting for pod pod-32d9ee59-5a50-4216-8478-68d67719c71f to disappear
Feb 22 13:14:43.226: INFO: Pod pod-32d9ee59-5a50-4216-8478-68d67719c71f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:14:43.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5713" for this suite.
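The (non-root,0644,tmpfs) variant above writes a 0644-mode file into a memory-backed emptyDir as a non-root user. A minimal sketch of an equivalent pod follows; the pod name, image, user ID, and command are illustrative, not the ones the suite generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                # non-root, as in the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox                 # assumed image; the suite uses its own test image
    command: ["sh", "-c", "echo data > /mnt/volume/file && chmod 0644 /mnt/volume/file && ls -l /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir, per the "tmpfs" in the test name
  restartPolicy: Never
```

With `medium: Memory` the kubelet mounts the volume as tmpfs, so the contents never touch node disk and count against the pod's memory.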
Feb 22 13:14:49.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:14:49.399: INFO: namespace emptydir-5713 deletion completed in 6.168407704s

• [SLOW TEST:16.870 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:14:49.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1cb7174d-1d3c-4d15-a142-6dbfe86bb0d6
STEP: Creating a pod to test consume configMaps
Feb 22 13:14:49.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b" in namespace "configmap-7850" to be "success or failure"
Feb 22 13:14:49.602: INFO: Pod "pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b": Phase="Pending", Reason="", readiness=false. Elapsed: 87.832617ms
Feb 22 13:14:51.611: INFO: Pod "pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097594684s
Feb 22 13:14:53.627: INFO: Pod "pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113192907s
Feb 22 13:14:55.646: INFO: Pod "pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132036685s
Feb 22 13:14:57.659: INFO: Pod "pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144785072s
Feb 22 13:14:59.670: INFO: Pod "pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.155968509s
STEP: Saw pod success
Feb 22 13:14:59.670: INFO: Pod "pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b" satisfied condition "success or failure"
Feb 22 13:14:59.676: INFO: Trying to get logs from node iruya-node pod pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b container configmap-volume-test:
STEP: delete the pod
Feb 22 13:14:59.746: INFO: Waiting for pod pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b to disappear
Feb 22 13:14:59.882: INFO: Pod pod-configmaps-369855c6-8097-417c-85b5-1b5bb77f237b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:14:59.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7850" for this suite.
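"Mappings and Item mode set" above refers to a configMap volume that remaps keys to custom paths via `items` and sets a per-file mode. A minimal sketch of such a volume; the pod name, key, and path are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-items       # hypothetical name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap          # hypothetical; the suite generates a UUID-suffixed name
      items:
      - key: data-1               # key in the ConfigMap...
        path: path/to/data-2      # ...exposed under a remapped relative path
        mode: 0400                # per-item file mode ("Item mode set")
  restartPolicy: Never
```

Only keys listed under `items` are projected; everything else in the ConfigMap is omitted from the volume.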
Feb 22 13:15:05.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:15:06.045: INFO: namespace configmap-7850 deletion completed in 6.152861393s

• [SLOW TEST:16.645 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:15:06.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a70b83e1-3291-43f3-9422-67c25e522789
STEP: Creating a pod to test consume secrets
Feb 22 13:15:06.356: INFO: Waiting up to 5m0s for pod "pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79" in namespace "secrets-1073" to be "success or failure"
Feb 22 13:15:06.377: INFO: Pod "pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79": Phase="Pending", Reason="", readiness=false. Elapsed: 20.069276ms
Feb 22 13:15:08.388: INFO: Pod "pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031094674s
Feb 22 13:15:10.397: INFO: Pod "pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040186325s
Feb 22 13:15:12.429: INFO: Pod "pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072307383s
Feb 22 13:15:14.441: INFO: Pod "pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084076814s
Feb 22 13:15:16.564: INFO: Pod "pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.207273202s
STEP: Saw pod success
Feb 22 13:15:16.564: INFO: Pod "pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79" satisfied condition "success or failure"
Feb 22 13:15:16.570: INFO: Trying to get logs from node iruya-node pod pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79 container secret-volume-test:
STEP: delete the pod
Feb 22 13:15:16.639: INFO: Waiting for pod pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79 to disappear
Feb 22 13:15:16.739: INFO: Pod pod-secrets-33ba1ccc-28a9-48f3-860a-579aad856d79 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:15:16.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1073" for this suite.
Feb 22 13:15:22.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:15:22.926: INFO: namespace secrets-1073 deletion completed in 6.179068418s
STEP: Destroying namespace "secret-namespace-7325" for this suite.
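The Secrets test above checks that mounting a secret is scoped to its own namespace even when a secret of the same name exists elsewhere (hence the second namespace, secret-namespace-7325, in the teardown). A sketch of the setup under test; the secret name, keys, and values are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test               # hypothetical; same name in both namespaces
  namespace: secrets-1073
stringData:
  data-1: value-in-test-namespace # illustrative key/value
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test               # identical name, different namespace
  namespace: secret-namespace-7325
stringData:
  data-1: value-in-other-namespace
```

A pod in secrets-1073 that mounts `secret-test` must see only the first object; secret references in pod specs never cross namespace boundaries.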
Feb 22 13:15:28.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:15:29.071: INFO: namespace secret-namespace-7325 deletion completed in 6.144335953s

• [SLOW TEST:23.025 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:15:29.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 22 13:15:29.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4024'
Feb 22 13:15:29.285: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 22 13:15:29.285: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb 22 13:15:31.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4024'
Feb 22 13:15:31.542: INFO: stderr: ""
Feb 22 13:15:31.542: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:15:31.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4024" for this suite.
Feb 22 13:15:37.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:15:37.746: INFO: namespace kubectl-4024 deletion completed in 6.197428808s

• [SLOW TEST:8.675 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:15:37.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-bae38815-4f1b-4182-8ca0-4c96bc303b4a
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-bae38815-4f1b-4182-8ca0-4c96bc303b4a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:15:50.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2886" for this suite.
Feb 22 13:16:12.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:16:12.280: INFO: namespace projected-2886 deletion completed in 22.116919846s

• [SLOW TEST:34.533 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:16:12.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5581.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5581.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 22 13:16:26.719: INFO: File wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local from pod dns-5581/dns-test-106f8f81-3fec-452b-a53a-d51c4101d09a contains '' instead of 'foo.example.com.'
Feb 22 13:16:26.726: INFO: File jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local from pod dns-5581/dns-test-106f8f81-3fec-452b-a53a-d51c4101d09a contains '' instead of 'foo.example.com.'
Feb 22 13:16:26.726: INFO: Lookups using dns-5581/dns-test-106f8f81-3fec-452b-a53a-d51c4101d09a failed for: [wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local]
Feb 22 13:16:31.759: INFO: DNS probes using dns-test-106f8f81-3fec-452b-a53a-d51c4101d09a succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5581.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5581.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 22 13:16:52.483: INFO: File wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local from pod dns-5581/dns-test-7a9a76a6-5cf9-4a47-abf9-c1322f5f036d contains '' instead of 'bar.example.com.'
Feb 22 13:16:52.494: INFO: File jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local from pod dns-5581/dns-test-7a9a76a6-5cf9-4a47-abf9-c1322f5f036d contains '' instead of 'bar.example.com.'
Feb 22 13:16:52.494: INFO: Lookups using dns-5581/dns-test-7a9a76a6-5cf9-4a47-abf9-c1322f5f036d failed for: [wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local]
Feb 22 13:16:57.507: INFO: File wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local from pod dns-5581/dns-test-7a9a76a6-5cf9-4a47-abf9-c1322f5f036d contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 22 13:16:57.518: INFO: File jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local from pod dns-5581/dns-test-7a9a76a6-5cf9-4a47-abf9-c1322f5f036d contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 22 13:16:57.518: INFO: Lookups using dns-5581/dns-test-7a9a76a6-5cf9-4a47-abf9-c1322f5f036d failed for: [wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local]
Feb 22 13:17:02.554: INFO: DNS probes using dns-test-7a9a76a6-5cf9-4a47-abf9-c1322f5f036d succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5581.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5581.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 22 13:17:21.240: INFO: File wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local from pod dns-5581/dns-test-645a0d12-74e8-44be-b7d3-0ffa7460473e contains '' instead of '10.102.179.13'
Feb 22 13:17:21.247: INFO: File jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local from pod dns-5581/dns-test-645a0d12-74e8-44be-b7d3-0ffa7460473e contains '' instead of '10.102.179.13'
Feb 22 13:17:21.247: INFO: Lookups using dns-5581/dns-test-645a0d12-74e8-44be-b7d3-0ffa7460473e failed for: [wheezy_udp@dns-test-service-3.dns-5581.svc.cluster.local jessie_udp@dns-test-service-3.dns-5581.svc.cluster.local]
Feb 22 13:17:26.277: INFO: DNS probes using dns-test-645a0d12-74e8-44be-b7d3-0ffa7460473e succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:17:26.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5581" for this suite.
Feb 22 13:17:34.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:17:34.774: INFO: namespace dns-5581 deletion completed in 8.247515399s

• [SLOW TEST:82.494 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:17:34.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0222 13:17:44.957490 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 22 13:17:44.957: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:17:44.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8547" for this suite.
Feb 22 13:17:51.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:17:51.651: INFO: namespace gc-8547 deletion completed in 6.689412519s

• [SLOW TEST:16.876 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:17:51.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 13:17:51.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b" in namespace "projected-2495" to be "success or failure"
Feb 22 13:17:51.832: INFO: Pod "downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.344051ms
Feb 22 13:17:53.847: INFO: Pod "downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034196249s
Feb 22 13:17:55.873: INFO: Pod "downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059459805s
Feb 22 13:17:57.882: INFO: Pod "downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069132568s
Feb 22 13:17:59.901: INFO: Pod "downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088047585s
Feb 22 13:18:01.914: INFO: Pod "downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10143459s
STEP: Saw pod success
Feb 22 13:18:01.915: INFO: Pod "downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b" satisfied condition "success or failure"
Feb 22 13:18:01.922: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b container client-container:
STEP: delete the pod
Feb 22 13:18:02.171: INFO: Waiting for pod downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b to disappear
Feb 22 13:18:02.177: INFO: Pod downwardapi-volume-80cabbc1-9ca5-4979-9db4-5173b6d2148b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:18:02.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2495" for this suite.
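The downward API test above verifies that when a container sets no memory limit, `resourceFieldRef: limits.memory` falls back to the node's allocatable memory. A minimal sketch of such a pod; name, image, and mount path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit      # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # Deliberately no resources.limits.memory: the projected value then
    # defaults to the node's allocatable memory, which is what the test checks.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
  restartPolicy: Never
```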
Feb 22 13:18:08.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:18:08.387: INFO: namespace projected-2495 deletion completed in 6.200904166s

• [SLOW TEST:16.736 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:18:08.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-3bbee55c-725b-40ba-949f-b394f5e814cc
STEP: Creating a pod to test consume secrets
Feb 22 13:18:08.515: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522" in namespace "projected-1428" to be "success or failure"
Feb 22 13:18:08.542: INFO: Pod "pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522": Phase="Pending", Reason="", readiness=false. Elapsed: 25.858553ms
Feb 22 13:18:10.559: INFO: Pod "pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043769814s
Feb 22 13:18:12.584: INFO: Pod "pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068523616s
Feb 22 13:18:14.596: INFO: Pod "pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080566124s
Feb 22 13:18:16.619: INFO: Pod "pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103184069s
Feb 22 13:18:20.142: INFO: Pod "pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.626528649s
STEP: Saw pod success
Feb 22 13:18:20.142: INFO: Pod "pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522" satisfied condition "success or failure"
Feb 22 13:18:20.156: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522 container secret-volume-test:
STEP: delete the pod
Feb 22 13:18:20.643: INFO: Waiting for pod pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522 to disappear
Feb 22 13:18:20.667: INFO: Pod pod-projected-secrets-ddaff142-a3a7-4400-85fb-575b479c3522 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:18:20.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1428" for this suite.
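"Consumable in multiple volumes" above means the same secret is mounted through more than one projected volume in a single pod. A minimal sketch; the pod name, secret name, image, and mount paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets     # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox                # assumed image
    command: ["sh", "-c", "ls /etc/projected-secret-1 /etc/projected-secret-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: my-projected-secret   # hypothetical; the suite uses a UUID-suffixed name
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: my-projected-secret   # same secret, second volume
  restartPolicy: Never
```

Both mounts must expose the same keys; the test asserts the secret is readable at each path.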
Feb 22 13:18:26.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:18:26.800: INFO: namespace projected-1428 deletion completed in 6.125517624s

• [SLOW TEST:18.413 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:18:26.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 22 13:18:28.301: INFO: Pod name wrapped-volume-race-302ab8e8-6bba-461e-a1ac-90743cd4e62f: Found 0 pods out of 5
Feb 22 13:18:33.325: INFO: Pod name wrapped-volume-race-302ab8e8-6bba-461e-a1ac-90743cd4e62f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-302ab8e8-6bba-461e-a1ac-90743cd4e62f in namespace emptydir-wrapper-7123, will wait for the garbage collector to delete the pods
Feb 22 13:19:05.589: INFO: Deleting ReplicationController wrapped-volume-race-302ab8e8-6bba-461e-a1ac-90743cd4e62f took: 29.755558ms
Feb 22 13:19:05.990: INFO: Terminating ReplicationController wrapped-volume-race-302ab8e8-6bba-461e-a1ac-90743cd4e62f pods took: 401.020623ms
STEP: Creating RC which spawns configmap-volume pods
Feb 22 13:19:56.875: INFO: Pod name wrapped-volume-race-60b5a1b1-96d0-46c3-be54-edfa66a701af: Found 0 pods out of 5
Feb 22 13:20:01.895: INFO: Pod name wrapped-volume-race-60b5a1b1-96d0-46c3-be54-edfa66a701af: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-60b5a1b1-96d0-46c3-be54-edfa66a701af in namespace emptydir-wrapper-7123, will wait for the garbage collector to delete the pods
Feb 22 13:20:36.014: INFO: Deleting ReplicationController wrapped-volume-race-60b5a1b1-96d0-46c3-be54-edfa66a701af took: 13.516058ms
Feb 22 13:20:36.415: INFO: Terminating ReplicationController wrapped-volume-race-60b5a1b1-96d0-46c3-be54-edfa66a701af pods took: 400.98782ms
STEP: Creating RC which spawns configmap-volume pods
Feb 22 13:21:17.637: INFO: Pod name wrapped-volume-race-eaf5681d-ee2f-401a-9c61-e42cd2eb9ca7: Found 0 pods out of 5
Feb 22 13:21:22.653: INFO: Pod name wrapped-volume-race-eaf5681d-ee2f-401a-9c61-e42cd2eb9ca7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-eaf5681d-ee2f-401a-9c61-e42cd2eb9ca7 in namespace emptydir-wrapper-7123, will wait for the garbage collector to delete the pods
Feb 22 13:22:00.780: INFO: Deleting ReplicationController wrapped-volume-race-eaf5681d-ee2f-401a-9c61-e42cd2eb9ca7 took: 14.575544ms
Feb 22 13:22:01.180: INFO: Terminating ReplicationController wrapped-volume-race-eaf5681d-ee2f-401a-9c61-e42cd2eb9ca7 pods took: 400.728549ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:22:58.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7123" for this suite.
Feb 22 13:23:10.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:23:10.494: INFO: namespace emptydir-wrapper-7123 deletion completed in 12.213890616s

• [SLOW TEST:283.694 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:23:10.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0222 13:23:52.039104 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 22 13:23:52.039: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:23:52.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8461" for this suite. 
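(The "orphan pods" behavior this test verifies corresponds to deleting the ReplicationController with an orphaning deletion policy. A minimal sketch of the request body involved, assuming the `DeleteOptions`/`propagationPolicy: Orphan` fields of the Kubernetes API; the endpoint path in the comment is illustrative:)

```python
import json

# DeleteOptions asking the API server to orphan dependents: the
# ReplicationController itself is deleted, but the garbage collector
# leaves its pods alone instead of cascading the delete.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",
}

body = json.dumps(delete_options)
# Sent as the body of a request such as:
#   DELETE /api/v1/namespaces/gc-8461/replicationcontrollers/<name>
print(body)
```

(With `propagationPolicy: "Background"` instead, the garbage collector would delete the pods as well, which is exactly what the 30-second wait above checks does NOT happen here.)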
Feb 22 13:24:01.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:24:02.133: INFO: namespace gc-8461 deletion completed in 10.087825664s • [SLOW TEST:51.639 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:24:02.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 22 13:24:02.425: INFO: Waiting up to 5m0s for pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a" in namespace "downward-api-3063" to be "success or failure" Feb 22 13:24:02.444: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.540204ms Feb 22 13:24:05.503: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.077138671s Feb 22 13:24:07.515: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.088926457s Feb 22 13:24:09.552: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.12651867s Feb 22 13:24:11.570: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.144614585s Feb 22 13:24:13.581: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.155123869s Feb 22 13:24:15.591: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.165717845s Feb 22 13:24:17.602: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.176385704s Feb 22 13:24:19.620: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.193992397s STEP: Saw pod success Feb 22 13:24:19.620: INFO: Pod "downward-api-c408dae1-02f9-409c-987b-beaf8d18466a" satisfied condition "success or failure" Feb 22 13:24:19.627: INFO: Trying to get logs from node iruya-node pod downward-api-c408dae1-02f9-409c-987b-beaf8d18466a container dapi-container: STEP: delete the pod Feb 22 13:24:19.711: INFO: Waiting for pod downward-api-c408dae1-02f9-409c-987b-beaf8d18466a to disappear Feb 22 13:24:19.728: INFO: Pod downward-api-c408dae1-02f9-409c-987b-beaf8d18466a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:24:19.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3063" for this suite. 
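(The Downward API wiring exercised by this test can be expressed as container `env` entries backed by `fieldRef` selectors. A minimal sketch, assuming the standard `fieldPath` values from the Kubernetes pod spec; the environment-variable names themselves are illustrative:)

```python
# Downward API env entries: the pod's own name, namespace, and IP are
# injected into the container at runtime via fieldRef selectors.
# These dicts go under spec.containers[].env in the pod manifest.
downward_env = [
    {"name": "POD_NAME",
     "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}},
    {"name": "POD_NAMESPACE",
     "valueFrom": {"fieldRef": {"fieldPath": "metadata.namespace"}}},
    {"name": "POD_IP",
     "valueFrom": {"fieldRef": {"fieldPath": "status.podIP"}}},
]

for entry in downward_env:
    print(entry["name"], "<-", entry["valueFrom"]["fieldRef"]["fieldPath"])
```

(The test container then echoes these variables; the "success or failure" check above passes once the pod's logs show the expected values.)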
Feb 22 13:24:28.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:24:28.325: INFO: namespace downward-api-3063 deletion completed in 8.590404693s • [SLOW TEST:26.190 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:24:28.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-5d27a021-552d-44bf-be3e-ff7c842a122a in namespace container-probe-9017 Feb 22 13:24:38.520: INFO: Started pod busybox-5d27a021-552d-44bf-be3e-ff7c842a122a in namespace container-probe-9017 STEP: checking the pod's current state and verifying that restartCount is present Feb 22 13:24:38.528: INFO: Initial restart count of pod busybox-5d27a021-552d-44bf-be3e-ff7c842a122a is 0 STEP: deleting the 
pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:28:38.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9017" for this suite. Feb 22 13:28:44.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:28:44.833: INFO: namespace container-probe-9017 deletion completed in 6.167266657s • [SLOW TEST:256.508 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:28:44.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 22 
13:28:44.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4711' Feb 22 13:28:46.952: INFO: stderr: "" Feb 22 13:28:46.953: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Feb 22 13:28:57.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4711 -o json' Feb 22 13:28:57.178: INFO: stderr: "" Feb 22 13:28:57.179: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-22T13:28:46Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-4711\",\n \"resourceVersion\": \"25325847\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4711/pods/e2e-test-nginx-pod\",\n \"uid\": \"d00c34a2-e354-46fb-b248-68f9d461542c\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-wk59r\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": 
\"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-wk59r\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-wk59r\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-22T13:28:47Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-22T13:28:53Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-22T13:28:53Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-22T13:28:46Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://36be710f1069d207cf44b95de140d6ec954282dfafe179e46262986b442ed89d\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-22T13:28:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-22T13:28:47Z\"\n }\n}\n" STEP: replace the image in the pod Feb 22 13:28:57.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4711' Feb 22 13:28:57.513: INFO: stderr: "" Feb 22 13:28:57.513: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Feb 22 13:28:57.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4711' Feb 22 13:29:06.271: INFO: stderr: "" Feb 22 13:29:06.271: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:29:06.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4711" for this suite. Feb 22 13:29:12.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:29:12.538: INFO: namespace kubectl-4711 deletion completed in 6.191586918s • [SLOW TEST:27.704 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:29:12.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-5f51c7d1-b111-4916-8cc5-5fcbf38c767d STEP: Creating configMap with name cm-test-opt-upd-84b2a538-4f1c-44b1-be27-13736134d3c3 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5f51c7d1-b111-4916-8cc5-5fcbf38c767d STEP: Updating configmap cm-test-opt-upd-84b2a538-4f1c-44b1-be27-13736134d3c3 STEP: Creating configMap with name cm-test-opt-create-0ef9d3d8-9feb-4bbf-8f8b-9ccfe49f39fc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:29:27.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3329" for this suite. Feb 22 13:29:49.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:29:49.252: INFO: namespace projected-3329 deletion completed in 22.136974682s • [SLOW TEST:36.713 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:29:49.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 22 13:30:11.601: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 22 13:30:11.601: INFO: >>> kubeConfig: /root/.kube/config I0222 13:30:11.739561 8 log.go:172] (0xc001fb6210) (0xc001a08aa0) Create stream I0222 13:30:11.739841 8 log.go:172] (0xc001fb6210) (0xc001a08aa0) Stream added, broadcasting: 1 I0222 13:30:11.756118 8 log.go:172] (0xc001fb6210) Reply frame received for 1 I0222 13:30:11.756188 8 log.go:172] (0xc001fb6210) (0xc00156adc0) Create stream I0222 13:30:11.756201 8 log.go:172] (0xc001fb6210) (0xc00156adc0) Stream added, broadcasting: 3 I0222 13:30:11.759078 8 log.go:172] (0xc001fb6210) Reply frame received for 3 I0222 13:30:11.759151 8 log.go:172] (0xc001fb6210) (0xc0005606e0) Create stream I0222 13:30:11.759162 8 log.go:172] (0xc001fb6210) (0xc0005606e0) Stream added, broadcasting: 5 I0222 13:30:11.768831 8 log.go:172] (0xc001fb6210) Reply frame received for 5 I0222 13:30:11.928027 8 log.go:172] (0xc001fb6210) Data frame received for 3 I0222 13:30:11.928130 8 log.go:172] (0xc00156adc0) (3) Data frame handling I0222 13:30:11.928164 8 log.go:172] (0xc00156adc0) (3) Data frame sent I0222 13:30:12.130898 8 log.go:172] (0xc001fb6210) (0xc00156adc0) Stream removed, broadcasting: 3 I0222 13:30:12.131084 8 log.go:172] (0xc001fb6210) Data frame received for 1 I0222 13:30:12.131115 8 
log.go:172] (0xc001a08aa0) (1) Data frame handling I0222 13:30:12.131139 8 log.go:172] (0xc001a08aa0) (1) Data frame sent I0222 13:30:12.131154 8 log.go:172] (0xc001fb6210) (0xc001a08aa0) Stream removed, broadcasting: 1 I0222 13:30:12.131198 8 log.go:172] (0xc001fb6210) (0xc0005606e0) Stream removed, broadcasting: 5 I0222 13:30:12.131345 8 log.go:172] (0xc001fb6210) Go away received I0222 13:30:12.131541 8 log.go:172] (0xc001fb6210) (0xc001a08aa0) Stream removed, broadcasting: 1 I0222 13:30:12.131553 8 log.go:172] (0xc001fb6210) (0xc00156adc0) Stream removed, broadcasting: 3 I0222 13:30:12.131562 8 log.go:172] (0xc001fb6210) (0xc0005606e0) Stream removed, broadcasting: 5 Feb 22 13:30:12.131: INFO: Exec stderr: "" Feb 22 13:30:12.131: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 22 13:30:12.131: INFO: >>> kubeConfig: /root/.kube/config I0222 13:30:12.203575 8 log.go:172] (0xc001fb6dc0) (0xc001a08dc0) Create stream I0222 13:30:12.203860 8 log.go:172] (0xc001fb6dc0) (0xc001a08dc0) Stream added, broadcasting: 1 I0222 13:30:12.212262 8 log.go:172] (0xc001fb6dc0) Reply frame received for 1 I0222 13:30:12.212436 8 log.go:172] (0xc001fb6dc0) (0xc001b6cfa0) Create stream I0222 13:30:12.212463 8 log.go:172] (0xc001fb6dc0) (0xc001b6cfa0) Stream added, broadcasting: 3 I0222 13:30:12.214144 8 log.go:172] (0xc001fb6dc0) Reply frame received for 3 I0222 13:30:12.214190 8 log.go:172] (0xc001fb6dc0) (0xc00156ae60) Create stream I0222 13:30:12.214198 8 log.go:172] (0xc001fb6dc0) (0xc00156ae60) Stream added, broadcasting: 5 I0222 13:30:12.215662 8 log.go:172] (0xc001fb6dc0) Reply frame received for 5 I0222 13:30:12.307412 8 log.go:172] (0xc001fb6dc0) Data frame received for 3 I0222 13:30:12.307564 8 log.go:172] (0xc001b6cfa0) (3) Data frame handling I0222 13:30:12.307622 8 log.go:172] (0xc001b6cfa0) (3) Data frame sent 
I0222 13:30:12.446624 8 log.go:172] (0xc001fb6dc0) Data frame received for 1 I0222 13:30:12.447023 8 log.go:172] (0xc001fb6dc0) (0xc001b6cfa0) Stream removed, broadcasting: 3 I0222 13:30:12.447205 8 log.go:172] (0xc001a08dc0) (1) Data frame handling I0222 13:30:12.447328 8 log.go:172] (0xc001a08dc0) (1) Data frame sent I0222 13:30:12.447652 8 log.go:172] (0xc001fb6dc0) (0xc00156ae60) Stream removed, broadcasting: 5 I0222 13:30:12.447960 8 log.go:172] (0xc001fb6dc0) (0xc001a08dc0) Stream removed, broadcasting: 1 I0222 13:30:12.448055 8 log.go:172] (0xc001fb6dc0) Go away received I0222 13:30:12.448756 8 log.go:172] (0xc001fb6dc0) (0xc001a08dc0) Stream removed, broadcasting: 1 I0222 13:30:12.448848 8 log.go:172] (0xc001fb6dc0) (0xc001b6cfa0) Stream removed, broadcasting: 3 I0222 13:30:12.448878 8 log.go:172] (0xc001fb6dc0) (0xc00156ae60) Stream removed, broadcasting: 5 Feb 22 13:30:12.448: INFO: Exec stderr: "" Feb 22 13:30:12.449: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 22 13:30:12.449: INFO: >>> kubeConfig: /root/.kube/config I0222 13:30:12.542302 8 log.go:172] (0xc0004456b0) (0xc001b6d2c0) Create stream I0222 13:30:12.542484 8 log.go:172] (0xc0004456b0) (0xc001b6d2c0) Stream added, broadcasting: 1 I0222 13:30:12.559945 8 log.go:172] (0xc0004456b0) Reply frame received for 1 I0222 13:30:12.560309 8 log.go:172] (0xc0004456b0) (0xc001b6d400) Create stream I0222 13:30:12.560424 8 log.go:172] (0xc0004456b0) (0xc001b6d400) Stream added, broadcasting: 3 I0222 13:30:12.575531 8 log.go:172] (0xc0004456b0) Reply frame received for 3 I0222 13:30:12.575743 8 log.go:172] (0xc0004456b0) (0xc001a08e60) Create stream I0222 13:30:12.575860 8 log.go:172] (0xc0004456b0) (0xc001a08e60) Stream added, broadcasting: 5 I0222 13:30:12.602697 8 log.go:172] (0xc0004456b0) Reply frame received for 5 I0222 13:30:13.015823 8 log.go:172] 
(0xc0004456b0) Data frame received for 3 I0222 13:30:13.016042 8 log.go:172] (0xc001b6d400) (3) Data frame handling I0222 13:30:13.016081 8 log.go:172] (0xc001b6d400) (3) Data frame sent I0222 13:30:13.161795 8 log.go:172] (0xc0004456b0) Data frame received for 1 I0222 13:30:13.161879 8 log.go:172] (0xc0004456b0) (0xc001b6d400) Stream removed, broadcasting: 3 I0222 13:30:13.161919 8 log.go:172] (0xc001b6d2c0) (1) Data frame handling I0222 13:30:13.161934 8 log.go:172] (0xc001b6d2c0) (1) Data frame sent I0222 13:30:13.161952 8 log.go:172] (0xc0004456b0) (0xc001a08e60) Stream removed, broadcasting: 5 I0222 13:30:13.161976 8 log.go:172] (0xc0004456b0) (0xc001b6d2c0) Stream removed, broadcasting: 1 I0222 13:30:13.161989 8 log.go:172] (0xc0004456b0) Go away received I0222 13:30:13.162251 8 log.go:172] (0xc0004456b0) (0xc001b6d2c0) Stream removed, broadcasting: 1 I0222 13:30:13.162279 8 log.go:172] (0xc0004456b0) (0xc001b6d400) Stream removed, broadcasting: 3 I0222 13:30:13.162295 8 log.go:172] (0xc0004456b0) (0xc001a08e60) Stream removed, broadcasting: 5 Feb 22 13:30:13.162: INFO: Exec stderr: "" Feb 22 13:30:13.162: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 22 13:30:13.162: INFO: >>> kubeConfig: /root/.kube/config I0222 13:30:13.212945 8 log.go:172] (0xc000ab9c30) (0xc000561180) Create stream I0222 13:30:13.213072 8 log.go:172] (0xc000ab9c30) (0xc000561180) Stream added, broadcasting: 1 I0222 13:30:13.220893 8 log.go:172] (0xc000ab9c30) Reply frame received for 1 I0222 13:30:13.220984 8 log.go:172] (0xc000ab9c30) (0xc00111c5a0) Create stream I0222 13:30:13.220994 8 log.go:172] (0xc000ab9c30) (0xc00111c5a0) Stream added, broadcasting: 3 I0222 13:30:13.222310 8 log.go:172] (0xc000ab9c30) Reply frame received for 3 I0222 13:30:13.222329 8 log.go:172] (0xc000ab9c30) (0xc0005612c0) Create stream I0222 
13:30:13.222356 8 log.go:172] (0xc000ab9c30) (0xc0005612c0) Stream added, broadcasting: 5 I0222 13:30:13.224470 8 log.go:172] (0xc000ab9c30) Reply frame received for 5 I0222 13:30:13.320740 8 log.go:172] (0xc000ab9c30) Data frame received for 3 I0222 13:30:13.320791 8 log.go:172] (0xc00111c5a0) (3) Data frame handling I0222 13:30:13.320802 8 log.go:172] (0xc00111c5a0) (3) Data frame sent I0222 13:30:13.432416 8 log.go:172] (0xc000ab9c30) (0xc00111c5a0) Stream removed, broadcasting: 3 I0222 13:30:13.432546 8 log.go:172] (0xc000ab9c30) Data frame received for 1 I0222 13:30:13.432571 8 log.go:172] (0xc000561180) (1) Data frame handling I0222 13:30:13.432591 8 log.go:172] (0xc000561180) (1) Data frame sent I0222 13:30:13.432632 8 log.go:172] (0xc000ab9c30) (0xc000561180) Stream removed, broadcasting: 1 I0222 13:30:13.432690 8 log.go:172] (0xc000ab9c30) (0xc0005612c0) Stream removed, broadcasting: 5 I0222 13:30:13.432777 8 log.go:172] (0xc000ab9c30) Go away received I0222 13:30:13.433118 8 log.go:172] (0xc000ab9c30) (0xc000561180) Stream removed, broadcasting: 1 I0222 13:30:13.433133 8 log.go:172] (0xc000ab9c30) (0xc00111c5a0) Stream removed, broadcasting: 3 I0222 13:30:13.433145 8 log.go:172] (0xc000ab9c30) (0xc0005612c0) Stream removed, broadcasting: 5 Feb 22 13:30:13.433: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 22 13:30:13.433: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 22 13:30:13.433: INFO: >>> kubeConfig: /root/.kube/config I0222 13:30:13.506022 8 log.go:172] (0xc000d16bb0) (0xc00111cbe0) Create stream I0222 13:30:13.506218 8 log.go:172] (0xc000d16bb0) (0xc00111cbe0) Stream added, broadcasting: 1 I0222 13:30:13.510531 8 log.go:172] (0xc000d16bb0) Reply frame received for 1 I0222 13:30:13.510610 8 log.go:172] 
(0xc000d16bb0) (0xc0032f14a0) Create stream
I0222 13:30:13.510623       8 log.go:172] (0xc000d16bb0) (0xc0032f14a0) Stream added, broadcasting: 3
I0222 13:30:13.512101       8 log.go:172] (0xc000d16bb0) Reply frame received for 3
I0222 13:30:13.512138       8 log.go:172] (0xc000d16bb0) (0xc001b6d4a0) Create stream
I0222 13:30:13.512148       8 log.go:172] (0xc000d16bb0) (0xc001b6d4a0) Stream added, broadcasting: 5
I0222 13:30:13.512981       8 log.go:172] (0xc000d16bb0) Reply frame received for 5
I0222 13:30:13.597530       8 log.go:172] (0xc000d16bb0) Data frame received for 3
I0222 13:30:13.597673       8 log.go:172] (0xc0032f14a0) (3) Data frame handling
I0222 13:30:13.597726       8 log.go:172] (0xc0032f14a0) (3) Data frame sent
I0222 13:30:13.753401       8 log.go:172] (0xc000d16bb0) (0xc001b6d4a0) Stream removed, broadcasting: 5
I0222 13:30:13.753584       8 log.go:172] (0xc000d16bb0) Data frame received for 1
I0222 13:30:13.753624       8 log.go:172] (0xc000d16bb0) (0xc0032f14a0) Stream removed, broadcasting: 3
I0222 13:30:13.753699       8 log.go:172] (0xc00111cbe0) (1) Data frame handling
I0222 13:30:13.753727       8 log.go:172] (0xc00111cbe0) (1) Data frame sent
I0222 13:30:13.753767       8 log.go:172] (0xc000d16bb0) (0xc00111cbe0) Stream removed, broadcasting: 1
I0222 13:30:13.753847       8 log.go:172] (0xc000d16bb0) Go away received
I0222 13:30:13.754080       8 log.go:172] (0xc000d16bb0) (0xc00111cbe0) Stream removed, broadcasting: 1
I0222 13:30:13.754096       8 log.go:172] (0xc000d16bb0) (0xc0032f14a0) Stream removed, broadcasting: 3
I0222 13:30:13.754116       8 log.go:172] (0xc000d16bb0) (0xc001b6d4a0) Stream removed, broadcasting: 5
Feb 22 13:30:13.754: INFO: Exec stderr: ""
Feb 22 13:30:13.754: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 13:30:13.754: INFO: >>> kubeConfig: /root/.kube/config
I0222 13:30:13.840757       8 log.go:172] (0xc000d174a0) (0xc00111d040) Create stream
I0222 13:30:13.840854       8 log.go:172] (0xc000d174a0) (0xc00111d040) Stream added, broadcasting: 1
I0222 13:30:13.847717       8 log.go:172] (0xc000d174a0) Reply frame received for 1
I0222 13:30:13.847835       8 log.go:172] (0xc000d174a0) (0xc0032f1540) Create stream
I0222 13:30:13.847882       8 log.go:172] (0xc000d174a0) (0xc0032f1540) Stream added, broadcasting: 3
I0222 13:30:13.849975       8 log.go:172] (0xc000d174a0) Reply frame received for 3
I0222 13:30:13.850024       8 log.go:172] (0xc000d174a0) (0xc0032f15e0) Create stream
I0222 13:30:13.850034       8 log.go:172] (0xc000d174a0) (0xc0032f15e0) Stream added, broadcasting: 5
I0222 13:30:13.851589       8 log.go:172] (0xc000d174a0) Reply frame received for 5
I0222 13:30:14.035668       8 log.go:172] (0xc000d174a0) Data frame received for 3
I0222 13:30:14.035782       8 log.go:172] (0xc0032f1540) (3) Data frame handling
I0222 13:30:14.035815       8 log.go:172] (0xc0032f1540) (3) Data frame sent
I0222 13:30:14.206241       8 log.go:172] (0xc000d174a0) Data frame received for 1
I0222 13:30:14.206290       8 log.go:172] (0xc000d174a0) (0xc0032f15e0) Stream removed, broadcasting: 5
I0222 13:30:14.206328       8 log.go:172] (0xc00111d040) (1) Data frame handling
I0222 13:30:14.206346       8 log.go:172] (0xc00111d040) (1) Data frame sent
I0222 13:30:14.206366       8 log.go:172] (0xc000d174a0) (0xc0032f1540) Stream removed, broadcasting: 3
I0222 13:30:14.206384       8 log.go:172] (0xc000d174a0) (0xc00111d040) Stream removed, broadcasting: 1
I0222 13:30:14.206396       8 log.go:172] (0xc000d174a0) Go away received
I0222 13:30:14.206623       8 log.go:172] (0xc000d174a0) (0xc00111d040) Stream removed, broadcasting: 1
I0222 13:30:14.206637       8 log.go:172] (0xc000d174a0) (0xc0032f1540) Stream removed, broadcasting: 3
I0222 13:30:14.206663       8 log.go:172] (0xc000d174a0) (0xc0032f15e0) Stream removed, broadcasting: 5
Feb 22 13:30:14.206: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 22 13:30:14.206: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 13:30:14.206: INFO: >>> kubeConfig: /root/.kube/config
I0222 13:30:14.253485       8 log.go:172] (0xc00250c9a0) (0xc00156b220) Create stream
I0222 13:30:14.253681       8 log.go:172] (0xc00250c9a0) (0xc00156b220) Stream added, broadcasting: 1
I0222 13:30:14.259370       8 log.go:172] (0xc00250c9a0) Reply frame received for 1
I0222 13:30:14.259429       8 log.go:172] (0xc00250c9a0) (0xc0032f1680) Create stream
I0222 13:30:14.259438       8 log.go:172] (0xc00250c9a0) (0xc0032f1680) Stream added, broadcasting: 3
I0222 13:30:14.261248       8 log.go:172] (0xc00250c9a0) Reply frame received for 3
I0222 13:30:14.261373       8 log.go:172] (0xc00250c9a0) (0xc00156b2c0) Create stream
I0222 13:30:14.261387       8 log.go:172] (0xc00250c9a0) (0xc00156b2c0) Stream added, broadcasting: 5
I0222 13:30:14.262275       8 log.go:172] (0xc00250c9a0) Reply frame received for 5
I0222 13:30:14.327630       8 log.go:172] (0xc00250c9a0) Data frame received for 3
I0222 13:30:14.327785       8 log.go:172] (0xc0032f1680) (3) Data frame handling
I0222 13:30:14.327835       8 log.go:172] (0xc0032f1680) (3) Data frame sent
I0222 13:30:14.437039       8 log.go:172] (0xc00250c9a0) (0xc0032f1680) Stream removed, broadcasting: 3
I0222 13:30:14.437290       8 log.go:172] (0xc00250c9a0) Data frame received for 1
I0222 13:30:14.437311       8 log.go:172] (0xc00156b220) (1) Data frame handling
I0222 13:30:14.437336       8 log.go:172] (0xc00156b220) (1) Data frame sent
I0222 13:30:14.437348       8 log.go:172] (0xc00250c9a0) (0xc00156b220) Stream removed, broadcasting: 1
I0222 13:30:14.439433       8 log.go:172] (0xc00250c9a0) (0xc00156b2c0) Stream removed, broadcasting: 5
I0222 13:30:14.439565       8 log.go:172] (0xc00250c9a0) Go away received
I0222 13:30:14.440287       8 log.go:172] (0xc00250c9a0) (0xc00156b220) Stream removed, broadcasting: 1
I0222 13:30:14.440361       8 log.go:172] (0xc00250c9a0) (0xc0032f1680) Stream removed, broadcasting: 3
I0222 13:30:14.440393       8 log.go:172] (0xc00250c9a0) (0xc00156b2c0) Stream removed, broadcasting: 5
Feb 22 13:30:14.440: INFO: Exec stderr: ""
Feb 22 13:30:14.440: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 13:30:14.440: INFO: >>> kubeConfig: /root/.kube/config
I0222 13:30:14.506624       8 log.go:172] (0xc00250d760) (0xc00156b4a0) Create stream
I0222 13:30:14.506829       8 log.go:172] (0xc00250d760) (0xc00156b4a0) Stream added, broadcasting: 1
I0222 13:30:14.518254       8 log.go:172] (0xc00250d760) Reply frame received for 1
I0222 13:30:14.518442       8 log.go:172] (0xc00250d760) (0xc001b6d540) Create stream
I0222 13:30:14.518477       8 log.go:172] (0xc00250d760) (0xc001b6d540) Stream added, broadcasting: 3
I0222 13:30:14.520882       8 log.go:172] (0xc00250d760) Reply frame received for 3
I0222 13:30:14.520939       8 log.go:172] (0xc00250d760) (0xc00111d180) Create stream
I0222 13:30:14.520983       8 log.go:172] (0xc00250d760) (0xc00111d180) Stream added, broadcasting: 5
I0222 13:30:14.522911       8 log.go:172] (0xc00250d760) Reply frame received for 5
I0222 13:30:14.669919       8 log.go:172] (0xc00250d760) Data frame received for 3
I0222 13:30:14.670073       8 log.go:172] (0xc001b6d540) (3) Data frame handling
I0222 13:30:14.670103       8 log.go:172] (0xc001b6d540) (3) Data frame sent
I0222 13:30:14.759793       8 log.go:172] (0xc00250d760) (0xc001b6d540) Stream removed, broadcasting: 3
I0222 13:30:14.759976       8 log.go:172] (0xc00250d760) Data frame received for 1
I0222 13:30:14.760007       8 log.go:172] (0xc00156b4a0) (1) Data frame handling
I0222 13:30:14.760045       8 log.go:172] (0xc00250d760) (0xc00111d180) Stream removed, broadcasting: 5
I0222 13:30:14.760105       8 log.go:172] (0xc00156b4a0) (1) Data frame sent
I0222 13:30:14.760128       8 log.go:172] (0xc00250d760) (0xc00156b4a0) Stream removed, broadcasting: 1
I0222 13:30:14.760156       8 log.go:172] (0xc00250d760) Go away received
I0222 13:30:14.760601       8 log.go:172] (0xc00250d760) (0xc00156b4a0) Stream removed, broadcasting: 1
I0222 13:30:14.760629       8 log.go:172] (0xc00250d760) (0xc001b6d540) Stream removed, broadcasting: 3
I0222 13:30:14.760647       8 log.go:172] (0xc00250d760) (0xc00111d180) Stream removed, broadcasting: 5
Feb 22 13:30:14.760: INFO: Exec stderr: ""
Feb 22 13:30:14.760: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 13:30:14.760: INFO: >>> kubeConfig: /root/.kube/config
I0222 13:30:14.829959       8 log.go:172] (0xc002700840) (0xc0005619a0) Create stream
I0222 13:30:14.830041       8 log.go:172] (0xc002700840) (0xc0005619a0) Stream added, broadcasting: 1
I0222 13:30:14.836110       8 log.go:172] (0xc002700840) Reply frame received for 1
I0222 13:30:14.836146       8 log.go:172] (0xc002700840) (0xc001a08f00) Create stream
I0222 13:30:14.836160       8 log.go:172] (0xc002700840) (0xc001a08f00) Stream added, broadcasting: 3
I0222 13:30:14.839763       8 log.go:172] (0xc002700840) Reply frame received for 3
I0222 13:30:14.839791       8 log.go:172] (0xc002700840) (0xc001a08fa0) Create stream
I0222 13:30:14.839802       8 log.go:172] (0xc002700840) (0xc001a08fa0) Stream added, broadcasting: 5
I0222 13:30:14.841264       8 log.go:172] (0xc002700840) Reply frame received for 5
I0222 13:30:14.942370       8 log.go:172] (0xc002700840) Data frame received for 3
I0222 13:30:14.942461       8 log.go:172] (0xc001a08f00) (3) Data frame handling
I0222 13:30:14.942487       8 log.go:172] (0xc001a08f00) (3) Data frame sent
I0222 13:30:15.046541       8 log.go:172] (0xc002700840) (0xc001a08fa0) Stream removed, broadcasting: 5
I0222 13:30:15.046787       8 log.go:172] (0xc002700840) Data frame received for 1
I0222 13:30:15.046824       8 log.go:172] (0xc002700840) (0xc001a08f00) Stream removed, broadcasting: 3
I0222 13:30:15.046902       8 log.go:172] (0xc0005619a0) (1) Data frame handling
I0222 13:30:15.046952       8 log.go:172] (0xc0005619a0) (1) Data frame sent
I0222 13:30:15.046965       8 log.go:172] (0xc002700840) (0xc0005619a0) Stream removed, broadcasting: 1
I0222 13:30:15.046986       8 log.go:172] (0xc002700840) Go away received
I0222 13:30:15.047269       8 log.go:172] (0xc002700840) (0xc0005619a0) Stream removed, broadcasting: 1
I0222 13:30:15.047302       8 log.go:172] (0xc002700840) (0xc001a08f00) Stream removed, broadcasting: 3
I0222 13:30:15.047315       8 log.go:172] (0xc002700840) (0xc001a08fa0) Stream removed, broadcasting: 5
Feb 22 13:30:15.047: INFO: Exec stderr: ""
Feb 22 13:30:15.047: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4367 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 13:30:15.047: INFO: >>> kubeConfig: /root/.kube/config
I0222 13:30:15.101063       8 log.go:172] (0xc002896630) (0xc00111d540) Create stream
I0222 13:30:15.101226       8 log.go:172] (0xc002896630) (0xc00111d540) Stream added, broadcasting: 1
I0222 13:30:15.105886       8 log.go:172] (0xc002896630) Reply frame received for 1
I0222 13:30:15.105925       8 log.go:172] (0xc002896630) (0xc00156b5e0) Create stream
I0222 13:30:15.105933       8 log.go:172] (0xc002896630) (0xc00156b5e0) Stream added, broadcasting: 3
I0222 13:30:15.107855       8 log.go:172] (0xc002896630) Reply frame received for 3
I0222 13:30:15.107922       8 log.go:172] (0xc002896630) (0xc000561a40) Create stream
I0222 13:30:15.107932       8 log.go:172] (0xc002896630) (0xc000561a40) Stream added, broadcasting: 5
I0222 13:30:15.109293       8 log.go:172] (0xc002896630) Reply frame received for 5
I0222 13:30:15.201880       8 log.go:172] (0xc002896630) Data frame received for 3
I0222 13:30:15.201992       8 log.go:172] (0xc00156b5e0) (3) Data frame handling
I0222 13:30:15.202025       8 log.go:172] (0xc00156b5e0) (3) Data frame sent
I0222 13:30:15.306938       8 log.go:172] (0xc002896630) Data frame received for 1
I0222 13:30:15.307056       8 log.go:172] (0xc002896630) (0xc000561a40) Stream removed, broadcasting: 5
I0222 13:30:15.307213       8 log.go:172] (0xc00111d540) (1) Data frame handling
I0222 13:30:15.307242       8 log.go:172] (0xc00111d540) (1) Data frame sent
I0222 13:30:15.307587       8 log.go:172] (0xc002896630) (0xc00156b5e0) Stream removed, broadcasting: 3
I0222 13:30:15.307880       8 log.go:172] (0xc002896630) (0xc00111d540) Stream removed, broadcasting: 1
I0222 13:30:15.307970       8 log.go:172] (0xc002896630) Go away received
I0222 13:30:15.308601       8 log.go:172] (0xc002896630) (0xc00111d540) Stream removed, broadcasting: 1
I0222 13:30:15.308630       8 log.go:172] (0xc002896630) (0xc00156b5e0) Stream removed, broadcasting: 3
I0222 13:30:15.308641       8 log.go:172] (0xc002896630) (0xc000561a40) Stream removed, broadcasting: 5
Feb 22 13:30:15.308: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:30:15.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4367" for this suite.
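The `cat /etc/hosts` / `cat /etc/hosts-original` execs above distinguish a kubelet-managed hosts file from the image's own. A minimal local sketch of that distinction, not the e2e framework's code: it assumes the kubelet's marker header is the string `# Kubernetes-managed hosts file.` (drawn from the kubelet's behavior, not from this log), and the sample file contents are hard-coded for illustration.

```python
# Sketch: tell a kubelet-managed /etc/hosts from the container image's own copy.
# Assumption: the kubelet prepends this marker comment to hosts files it manages.
KUBELET_MARKER = "# Kubernetes-managed hosts file."


def is_kubelet_managed(hosts_content: str) -> bool:
    """Return True if the hosts file content starts with the kubelet's marker header."""
    first_line = hosts_content.splitlines()[0] if hosts_content else ""
    return first_line.strip() == KUBELET_MARKER


# Hard-coded sample contents (illustrative, not captured from this run):
managed = "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n10.44.0.5\ttest-pod\n"
original = "127.0.0.1\tlocalhost\n"
```

For a pod with `hostNetwork: true` (as the step above verifies), the file should look like `original`, not `managed`.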
Feb 22 13:31:09.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:31:09.525: INFO: namespace e2e-kubelet-etc-hosts-4367 deletion completed in 54.205447432s

• [SLOW TEST:80.273 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:31:09.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:32:01.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5454" for this suite.
Feb 22 13:32:07.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:32:07.346: INFO: namespace container-runtime-5454 deletion completed in 6.18815935s

• [SLOW TEST:57.819 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:32:07.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 13:32:07.433: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9" in namespace "projected-7641" to be "success or failure"
Feb 22 13:32:07.501: INFO: Pod "downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9": Phase="Pending", Reason="", readiness=false. Elapsed: 67.17127ms
Feb 22 13:32:09.512: INFO: Pod "downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078675002s
Feb 22 13:32:11.527: INFO: Pod "downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093725574s
Feb 22 13:32:13.539: INFO: Pod "downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105365549s
Feb 22 13:32:15.562: INFO: Pod "downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127802735s
Feb 22 13:32:17.570: INFO: Pod "downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.135880399s
STEP: Saw pod success
Feb 22 13:32:17.570: INFO: Pod "downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9" satisfied condition "success or failure"
Feb 22 13:32:17.576: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9 container client-container:
STEP: delete the pod
Feb 22 13:32:17.733: INFO: Waiting for pod downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9 to disappear
Feb 22 13:32:17.835: INFO: Pod downwardapi-volume-24604003-0147-4b89-8795-a6d1650f59a9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:32:17.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7641" for this suite.
Feb 22 13:32:23.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:32:24.018: INFO: namespace projected-7641 deletion completed in 6.166691747s

• [SLOW TEST:16.672 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:32:24.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 13:32:24.211: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"beb88c4d-0f24-4400-bb6c-643118318b59", Controller:(*bool)(0xc001f6cb1a), BlockOwnerDeletion:(*bool)(0xc001f6cb1b)}}
Feb 22 13:32:24.225: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7c332aeb-7517-4b93-9056-e15fa8e46101", Controller:(*bool)(0xc001f6ccda), BlockOwnerDeletion:(*bool)(0xc001f6ccdb)}}
Feb 22 13:32:24.236: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b773a831-6ab8-426b-a814-5a945f6aafc0", Controller:(*bool)(0xc001f6ce9a), BlockOwnerDeletion:(*bool)(0xc001f6ce9b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:32:29.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4992" for this suite.
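The three `OwnerReferences` lines above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), which the garbage collector must tolerate without blocking deletion. A sketch of the metadata the test sets on one of the pods, using the name and UID from the log; only the fields relevant to ownership are shown:

```yaml
# pod1's metadata, owned by pod3 (the other two pods are analogous, closing the cycle).
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: beb88c4d-0f24-4400-bb6c-643118318b59   # pod3's UID, as logged above
    controller: true
    blockOwnerDeletion: true
```

Even with `blockOwnerDeletion: true` on every edge, the dependency circle must not deadlock the collector, which is exactly what this conformance test asserts.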
Feb 22 13:32:35.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:32:35.589: INFO: namespace gc-4992 deletion completed in 6.193154151s

• [SLOW TEST:11.569 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:32:35.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 22 13:32:35.705: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:32:56.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8853" for this suite.
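The InitContainer test above creates a pod whose init containers must all run to completion, in order, before the regular containers start. A hypothetical pod spec of that general shape (names and images are illustrative, not taken from this log):

```yaml
# Sketch of a RestartAlways pod with init containers; the kubelet runs
# init-1 then init-2 to completion before starting run-1.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo   # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'true']
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'true']
  containers:
  - name: run-1
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
```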
Feb 22 13:33:20.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:33:20.282: INFO: namespace init-container-8853 deletion completed in 24.169449625s

• [SLOW TEST:44.692 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:33:20.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9971
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 22 13:33:20.349: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 22 13:33:56.576: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-9971 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 13:33:56.576: INFO: >>> kubeConfig: /root/.kube/config
I0222 13:33:56.698196       8 log.go:172] (0xc000dd53f0) (0xc0009da000) Create stream
I0222 13:33:56.698448       8 log.go:172] (0xc000dd53f0) (0xc0009da000) Stream added, broadcasting: 1
I0222 13:33:56.726194       8 log.go:172] (0xc000dd53f0) Reply frame received for 1
I0222 13:33:56.726384       8 log.go:172] (0xc000dd53f0) (0xc0009da140) Create stream
I0222 13:33:56.726402       8 log.go:172] (0xc000dd53f0) (0xc0009da140) Stream added, broadcasting: 3
I0222 13:33:56.730112       8 log.go:172] (0xc000dd53f0) Reply frame received for 3
I0222 13:33:56.730171       8 log.go:172] (0xc000dd53f0) (0xc002024000) Create stream
I0222 13:33:56.730179       8 log.go:172] (0xc000dd53f0) (0xc002024000) Stream added, broadcasting: 5
I0222 13:33:56.737995       8 log.go:172] (0xc000dd53f0) Reply frame received for 5
I0222 13:33:56.983943       8 log.go:172] (0xc000dd53f0) Data frame received for 3
I0222 13:33:56.984081       8 log.go:172] (0xc0009da140) (3) Data frame handling
I0222 13:33:56.984132       8 log.go:172] (0xc0009da140) (3) Data frame sent
I0222 13:33:57.114178       8 log.go:172] (0xc000dd53f0) (0xc0009da140) Stream removed, broadcasting: 3
I0222 13:33:57.114300       8 log.go:172] (0xc000dd53f0) Data frame received for 1
I0222 13:33:57.114348       8 log.go:172] (0xc000dd53f0) (0xc002024000) Stream removed, broadcasting: 5
I0222 13:33:57.114397       8 log.go:172] (0xc0009da000) (1) Data frame handling
I0222 13:33:57.114425       8 log.go:172] (0xc0009da000) (1) Data frame sent
I0222 13:33:57.114460       8 log.go:172] (0xc000dd53f0) (0xc0009da000) Stream removed, broadcasting: 1
I0222 13:33:57.114494       8 log.go:172] (0xc000dd53f0) Go away received
I0222 13:33:57.114834       8 log.go:172] (0xc000dd53f0) (0xc0009da000) Stream removed, broadcasting: 1
I0222 13:33:57.114855       8 log.go:172] (0xc000dd53f0) (0xc0009da140) Stream removed, broadcasting: 3
I0222 13:33:57.114876       8 log.go:172] (0xc000dd53f0) (0xc002024000) Stream removed, broadcasting: 5
Feb 22 13:33:57.115: INFO: Waiting for endpoints: map[]
Feb 22 13:33:57.123: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-9971 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 13:33:57.123: INFO: >>> kubeConfig: /root/.kube/config
I0222 13:33:57.203852       8 log.go:172] (0xc0004458c0) (0xc0011d43c0) Create stream
I0222 13:33:57.204034       8 log.go:172] (0xc0004458c0) (0xc0011d43c0) Stream added, broadcasting: 1
I0222 13:33:57.211005       8 log.go:172] (0xc0004458c0) Reply frame received for 1
I0222 13:33:57.211058       8 log.go:172] (0xc0004458c0) (0xc002024140) Create stream
I0222 13:33:57.211067       8 log.go:172] (0xc0004458c0) (0xc002024140) Stream added, broadcasting: 3
I0222 13:33:57.211906       8 log.go:172] (0xc0004458c0) Reply frame received for 3
I0222 13:33:57.211928       8 log.go:172] (0xc0004458c0) (0xc001baa280) Create stream
I0222 13:33:57.211938       8 log.go:172] (0xc0004458c0) (0xc001baa280) Stream added, broadcasting: 5
I0222 13:33:57.212752       8 log.go:172] (0xc0004458c0) Reply frame received for 5
I0222 13:33:57.357155       8 log.go:172] (0xc0004458c0) Data frame received for 3
I0222 13:33:57.357315       8 log.go:172] (0xc002024140) (3) Data frame handling
I0222 13:33:57.357341       8 log.go:172] (0xc002024140) (3) Data frame sent
I0222 13:33:57.515639       8 log.go:172] (0xc0004458c0) Data frame received for 1
I0222 13:33:57.515751       8 log.go:172] (0xc0004458c0) (0xc002024140) Stream removed, broadcasting: 3
I0222 13:33:57.515851       8 log.go:172] (0xc0011d43c0) (1) Data frame handling
I0222 13:33:57.515869       8 log.go:172] (0xc0011d43c0) (1) Data frame sent
I0222 13:33:57.515877       8 log.go:172] (0xc0004458c0) (0xc0011d43c0) Stream removed, broadcasting: 1
I0222 13:33:57.516119       8 log.go:172] (0xc0004458c0) (0xc001baa280) Stream removed, broadcasting: 5
I0222 13:33:57.516163       8 log.go:172] (0xc0004458c0) (0xc0011d43c0) Stream removed, broadcasting: 1
I0222 13:33:57.516174       8 log.go:172] (0xc0004458c0) (0xc002024140) Stream removed, broadcasting: 3
I0222 13:33:57.516180       8 log.go:172] (0xc0004458c0) (0xc001baa280) Stream removed, broadcasting: 5
Feb 22 13:33:57.516: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:33:57.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0222 13:33:57.517858       8 log.go:172] (0xc0004458c0) Go away received
STEP: Destroying namespace "pod-network-test-9971" for this suite.
Feb 22 13:34:21.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:34:21.721: INFO: namespace pod-network-test-9971 deletion completed in 24.19667637s

• [SLOW TEST:61.439 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:34:21.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 22 13:34:21.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 22 13:34:22.002: INFO: stderr: ""
Feb 22 13:34:22.003: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:34:22.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7885" for this suite.
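The api-versions test above shells out to `kubectl api-versions` and checks that the core group version `v1` appears in stdout. A minimal local sketch of that assertion, with an abridged sample of the stdout recorded in the log hard-coded in place of a live cluster:

```python
# Sketch of the check behind "should check if v1 is in available api versions":
# `kubectl api-versions` prints one group/version per line; the core group is "v1".
def has_core_v1(api_versions_stdout: str) -> bool:
    """Return True if the core 'v1' group version is listed in the command output."""
    return "v1" in api_versions_stdout.strip().split("\n")


# Abridged excerpt of the stdout captured in the log above:
sample = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nrbac.authorization.k8s.io/v1\nv1\n"
```

Note the exact-line comparison matters: a naive substring test would also match `apps/v1`, so the output is split into lines first.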
Feb 22 13:34:28.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:34:28.172: INFO: namespace kubectl-7885 deletion completed in 6.159708881s

• [SLOW TEST:6.450 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:34:28.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-59cd6dff-30f8-4068-8241-b6b36d96174f
STEP: Creating secret with name secret-projected-all-test-volume-ff9c28ec-f01a-44b5-bbf4-e68cedd42312
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 22 13:34:28.424: INFO: Waiting up to 5m0s for pod "projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152" in namespace "projected-6113" to be "success or failure"
Feb 22 13:34:28.433: INFO: Pod "projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152": Phase="Pending", Reason="", readiness=false. Elapsed: 8.810665ms
Feb 22 13:34:30.442: INFO: Pod "projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01786339s
Feb 22 13:34:32.456: INFO: Pod "projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031812289s
Feb 22 13:34:34.472: INFO: Pod "projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047556675s
Feb 22 13:34:36.488: INFO: Pod "projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063210113s
STEP: Saw pod success
Feb 22 13:34:36.488: INFO: Pod "projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152" satisfied condition "success or failure"
Feb 22 13:34:36.494: INFO: Trying to get logs from node iruya-node pod projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152 container projected-all-volume-test:
STEP: delete the pod
Feb 22 13:34:36.586: INFO: Waiting for pod projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152 to disappear
Feb 22 13:34:36.594: INFO: Pod projected-volume-1bfd6c2c-d9ec-4dda-9ebd-6bba26f97152 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:34:36.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6113" for this suite.
Feb 22 13:34:42.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:34:42.773: INFO: namespace projected-6113 deletion completed in 6.172565049s • [SLOW TEST:14.600 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:34:42.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ead82908-2f23-45bf-9a7f-674203cce5f3 STEP: Creating a pod to test consume configMaps Feb 22 13:34:43.594: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff" in namespace "projected-8116" to be "success or failure" Feb 22 13:34:43.615: INFO: Pod "pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.612562ms Feb 22 13:34:45.628: INFO: Pod "pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032875799s Feb 22 13:34:47.641: INFO: Pod "pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045805574s Feb 22 13:34:49.653: INFO: Pod "pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057942501s Feb 22 13:34:51.688: INFO: Pod "pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09303166s Feb 22 13:34:53.715: INFO: Pod "pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.120208232s STEP: Saw pod success Feb 22 13:34:53.715: INFO: Pod "pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff" satisfied condition "success or failure" Feb 22 13:34:53.725: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff container projected-configmap-volume-test: STEP: delete the pod Feb 22 13:34:53.861: INFO: Waiting for pod pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff to disappear Feb 22 13:34:53.919: INFO: Pod pod-projected-configmaps-ed2236e5-a6ae-4c8d-bac5-6a762a27daff no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:34:53.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8116" for this suite. 
Feb 22 13:35:00.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:35:00.136: INFO: namespace projected-8116 deletion completed in 6.19195713s • [SLOW TEST:17.362 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:35:00.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 22 13:35:00.255: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f" in namespace "projected-4226" to be "success or failure" Feb 22 13:35:00.267: INFO: Pod "downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.203258ms Feb 22 13:35:02.274: INFO: Pod "downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018228882s Feb 22 13:35:04.284: INFO: Pod "downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02846151s Feb 22 13:35:06.295: INFO: Pod "downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039557462s Feb 22 13:35:08.304: INFO: Pod "downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04881166s STEP: Saw pod success Feb 22 13:35:08.305: INFO: Pod "downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f" satisfied condition "success or failure" Feb 22 13:35:08.310: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f container client-container: STEP: delete the pod Feb 22 13:35:08.381: INFO: Waiting for pod downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f to disappear Feb 22 13:35:08.494: INFO: Pod downwardapi-volume-12150b61-4b08-4973-85f7-30334e346f1f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:35:08.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4226" for this suite. 
Feb 22 13:35:16.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:35:16.800: INFO: namespace projected-4226 deletion completed in 8.296386137s • [SLOW TEST:16.663 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:35:16.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-4a192110-5804-4ea4-afcf-f356f15262e7 STEP: Creating a pod to test consume configMaps Feb 22 13:35:16.952: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443" in namespace "projected-5116" to be "success or failure" Feb 22 13:35:16.982: INFO: Pod "pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.267202ms Feb 22 13:35:19.001: INFO: Pod "pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048777669s Feb 22 13:35:21.010: INFO: Pod "pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057928365s Feb 22 13:35:23.018: INFO: Pod "pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066439649s Feb 22 13:35:25.100: INFO: Pod "pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148649117s Feb 22 13:35:27.112: INFO: Pod "pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.159936287s STEP: Saw pod success Feb 22 13:35:27.112: INFO: Pod "pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443" satisfied condition "success or failure" Feb 22 13:35:27.118: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443 container projected-configmap-volume-test: STEP: delete the pod Feb 22 13:35:27.338: INFO: Waiting for pod pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443 to disappear Feb 22 13:35:27.383: INFO: Pod pod-projected-configmaps-3aa91f15-5e62-4dbe-b3f0-0f5e9bd5e443 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:35:27.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5116" for this suite. 
Feb 22 13:35:33.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:35:33.526: INFO: namespace projected-5116 deletion completed in 6.127442155s • [SLOW TEST:16.726 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:35:33.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 22 13:35:33.593: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:35:41.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4976" for this suite. 
Feb 22 13:36:45.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:36:45.951: INFO: namespace pods-4976 deletion completed in 1m4.16274365s • [SLOW TEST:72.424 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:36:45.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:36:51.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6608" for this suite. 
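[Editor's note] The Watchers test above creates one watch per resource version of the produced events and verifies every watcher observes the remaining events in the same order. A hypothetical checker for that invariant (names are illustrative, not the test's actual code):

```python
def check_watch_order(canonical, watches):
    """Verify the ordering invariant the Watchers conformance test asserts.

    canonical: the full event sequence (by resource version), in order.
    watches:   dict mapping each watch's starting resource version to the
               events that watcher actually observed.
    Every watcher started at version v must see exactly the canonical
    events that follow v, in the same order.
    """
    for start_rv, observed in watches.items():
        expected = canonical[canonical.index(start_rv) + 1:]
        if observed != expected:
            return False
    return True
```

This is why the test only needs ~6 seconds: it is a pure ordering check against the apiserver's watch stream, with no pods to schedule.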
Feb 22 13:36:57.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:36:57.958: INFO: namespace watch-6608 deletion completed in 6.197398159s • [SLOW TEST:12.006 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:36:57.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-8607 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8607 STEP: Deleting pre-stop pod Feb 22 13:37:23.289: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:37:23.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8607" for this suite. Feb 22 13:38:03.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:38:03.481: INFO: namespace prestop-8607 deletion completed in 40.16204777s • [SLOW TEST:65.523 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:38:03.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-afea569a-09fd-44d4-9ae5-186491e208e5 STEP: Creating secret with name 
s-test-opt-upd-2a97d368-b767-42db-80f3-9184b2407b56 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-afea569a-09fd-44d4-9ae5-186491e208e5 STEP: Updating secret s-test-opt-upd-2a97d368-b767-42db-80f3-9184b2407b56 STEP: Creating secret with name s-test-opt-create-c85fac0d-435f-4de9-ba0a-70d7d3a9f96f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:39:21.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9879" for this suite. Feb 22 13:39:43.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:39:43.966: INFO: namespace secrets-9879 deletion completed in 22.152819114s • [SLOW TEST:100.484 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:39:43.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 22 
13:39:44.111: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 22 13:39:44.118: INFO: Waiting for terminating namespaces to be deleted... Feb 22 13:39:44.121: INFO: Logging pods the kubelet thinks is on node iruya-node before test Feb 22 13:39:44.140: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Feb 22 13:39:44.140: INFO: Container weave ready: true, restart count 0 Feb 22 13:39:44.140: INFO: Container weave-npc ready: true, restart count 0 Feb 22 13:39:44.140: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded) Feb 22 13:39:44.140: INFO: Container kube-bench ready: false, restart count 0 Feb 22 13:39:44.140: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Feb 22 13:39:44.140: INFO: Container kube-proxy ready: true, restart count 0 Feb 22 13:39:44.140: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Feb 22 13:39:44.150: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Feb 22 13:39:44.150: INFO: Container kube-scheduler ready: true, restart count 15 Feb 22 13:39:44.150: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 22 13:39:44.150: INFO: Container coredns ready: true, restart count 0 Feb 22 13:39:44.150: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 22 13:39:44.150: INFO: Container coredns ready: true, restart count 0 Feb 22 13:39:44.150: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Feb 22 13:39:44.150: INFO: Container etcd ready: true, restart count 0 Feb 22 13:39:44.150: INFO: weave-net-bzl4d from 
kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Feb 22 13:39:44.150: INFO: Container weave ready: true, restart count 0 Feb 22 13:39:44.150: INFO: Container weave-npc ready: true, restart count 0 Feb 22 13:39:44.150: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Feb 22 13:39:44.150: INFO: Container kube-controller-manager ready: true, restart count 23 Feb 22 13:39:44.150: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Feb 22 13:39:44.150: INFO: Container kube-proxy ready: true, restart count 0 Feb 22 13:39:44.150: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Feb 22 13:39:44.150: INFO: Container kube-apiserver ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7955a4af-e9ef-4c2c-85ba-91a85c067492 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-7955a4af-e9ef-4c2c-85ba-91a85c067492 off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-7955a4af-e9ef-4c2c-85ba-91a85c067492 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:40:04.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5837" for this suite. 
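[Editor's note] The NodeSelector predicate validated above is a simple subset check: the pod schedules onto a node only if every key/value pair in its `nodeSelector` is present in the node's labels (here, the random label `kubernetes.io/e2e-7955a4af-… = 42` applied to `iruya-node`). A sketch of that matching rule, not the scheduler's actual source:

```python
def node_selector_matches(pod_node_selector, node_labels):
    """NodeSelector predicate: the pod fits the node iff every
    key/value in the pod's nodeSelector appears on the node."""
    return all(node_labels.get(k) == v for k, v in pod_node_selector.items())
```

The test drives exactly this: launch an unlabeled pod to find a schedulable node, label that node, relaunch the pod with a matching `nodeSelector`, then remove the label.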
Feb 22 13:40:34.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:40:34.631: INFO: namespace sched-pred-5837 deletion completed in 30.191823889s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:50.664 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:40:34.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 22 13:40:34.726: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:40:48.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7309" for this suite. Feb 22 13:40:54.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:40:54.301: INFO: namespace init-container-7309 deletion completed in 6.16111218s • [SLOW TEST:19.670 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:40:54.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Feb 22 13:41:03.004: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1530 pod-service-account-18aba940-92eb-4efa-9085-5921f2997962 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Feb 22 13:41:05.719: INFO: Running '/usr/local/bin/kubectl exec 
--namespace=svcaccounts-1530 pod-service-account-18aba940-92eb-4efa-9085-5921f2997962 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Feb 22 13:41:06.265: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1530 pod-service-account-18aba940-92eb-4efa-9085-5921f2997962 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:41:06.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1530" for this suite. Feb 22 13:41:12.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:41:12.995: INFO: namespace svcaccounts-1530 deletion completed in 6.204897711s • [SLOW TEST:18.694 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:41:12.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job 
STEP: deleting Job.batch foo in namespace job-4111, will wait for the garbage collector to delete the pods Feb 22 13:41:27.187: INFO: Deleting Job.batch foo took: 17.036511ms Feb 22 13:41:27.487: INFO: Terminating Job.batch foo pods took: 300.840877ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:42:06.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4111" for this suite. Feb 22 13:42:12.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:42:12.851: INFO: namespace job-4111 deletion completed in 6.144334179s • [SLOW TEST:59.855 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:42:12.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: 
Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 22 13:42:24.061: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:42:25.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9755" for this suite. Feb 22 13:44:13.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:44:13.251: INFO: namespace replicaset-9755 deletion completed in 1m48.155068142s • [SLOW TEST:120.398 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:44:13.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name 
secret-test-fd59bb2b-29b1-4c19-8e71-3501eb0fdb75 STEP: Creating a pod to test consume secrets Feb 22 13:44:13.329: INFO: Waiting up to 5m0s for pod "pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d" in namespace "secrets-9318" to be "success or failure" Feb 22 13:44:13.383: INFO: Pod "pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d": Phase="Pending", Reason="", readiness=false. Elapsed: 53.23366ms Feb 22 13:44:15.389: INFO: Pod "pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059533366s Feb 22 13:44:17.676: INFO: Pod "pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.346795234s Feb 22 13:44:19.687: INFO: Pod "pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357851151s Feb 22 13:44:21.702: INFO: Pod "pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.372250504s Feb 22 13:44:23.759: INFO: Pod "pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.429678516s STEP: Saw pod success Feb 22 13:44:23.759: INFO: Pod "pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d" satisfied condition "success or failure" Feb 22 13:44:23.765: INFO: Trying to get logs from node iruya-node pod pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d container secret-volume-test: STEP: delete the pod Feb 22 13:44:23.915: INFO: Waiting for pod pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d to disappear Feb 22 13:44:23.929: INFO: Pod pod-secrets-d25410f0-0463-4abd-b74c-864b73c5249d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:44:23.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9318" for this suite. 
Feb 22 13:44:30.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:44:30.136: INFO: namespace secrets-9318 deletion completed in 6.192367783s • [SLOW TEST:16.885 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:44:30.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 22 13:44:50.272: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:44:50.311: INFO: Pod pod-with-prestop-http-hook still exists Feb 22 13:44:52.311: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:44:52.321: INFO: Pod pod-with-prestop-http-hook still exists Feb 22 13:44:54.311: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:44:54.321: INFO: Pod pod-with-prestop-http-hook still exists Feb 22 13:44:56.312: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:44:56.352: INFO: Pod pod-with-prestop-http-hook still exists Feb 22 13:44:58.312: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:44:58.340: INFO: Pod pod-with-prestop-http-hook still exists Feb 22 13:45:00.311: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:45:00.321: INFO: Pod pod-with-prestop-http-hook still exists Feb 22 13:45:02.311: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:45:02.329: INFO: Pod pod-with-prestop-http-hook still exists Feb 22 13:45:04.311: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:45:04.318: INFO: Pod pod-with-prestop-http-hook still exists Feb 22 13:45:06.312: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:45:06.333: INFO: Pod pod-with-prestop-http-hook still exists Feb 22 13:45:08.312: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 22 13:45:08.322: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:45:08.373: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2237" for this suite. Feb 22 13:45:28.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:45:28.623: INFO: namespace container-lifecycle-hook-2237 deletion completed in 20.240678447s • [SLOW TEST:58.487 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:45:28.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-38610d6e-e0e6-42c9-9754-a46d000b7521 in namespace container-probe-1195 Feb 22 13:45:36.778: INFO: Started pod 
busybox-38610d6e-e0e6-42c9-9754-a46d000b7521 in namespace container-probe-1195 STEP: checking the pod's current state and verifying that restartCount is present Feb 22 13:45:36.782: INFO: Initial restart count of pod busybox-38610d6e-e0e6-42c9-9754-a46d000b7521 is 0 Feb 22 13:46:29.112: INFO: Restart count of pod container-probe-1195/busybox-38610d6e-e0e6-42c9-9754-a46d000b7521 is now 1 (52.330807702s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:46:29.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1195" for this suite. Feb 22 13:46:35.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:46:35.724: INFO: namespace container-probe-1195 deletion completed in 6.464119692s • [SLOW TEST:67.100 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:46:35.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e59bb217-f1d0-43cf-b5c0-90bb34aadc1d STEP: Creating a pod to test consume secrets Feb 22 13:46:35.873: INFO: Waiting up to 5m0s for pod "pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de" in namespace "secrets-7176" to be "success or failure" Feb 22 13:46:35.902: INFO: Pod "pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de": Phase="Pending", Reason="", readiness=false. Elapsed: 28.764629ms Feb 22 13:46:37.913: INFO: Pod "pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039730104s Feb 22 13:46:39.928: INFO: Pod "pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05448486s Feb 22 13:46:41.943: INFO: Pod "pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069444109s Feb 22 13:46:43.956: INFO: Pod "pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082623223s Feb 22 13:46:45.970: INFO: Pod "pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.096645666s STEP: Saw pod success Feb 22 13:46:45.971: INFO: Pod "pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de" satisfied condition "success or failure" Feb 22 13:46:45.977: INFO: Trying to get logs from node iruya-node pod pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de container secret-volume-test: STEP: delete the pod Feb 22 13:46:46.103: INFO: Waiting for pod pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de to disappear Feb 22 13:46:46.123: INFO: Pod pod-secrets-809ac663-7766-4e49-abf8-bf10f8e861de no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:46:46.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7176" for this suite. Feb 22 13:46:52.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:46:52.354: INFO: namespace secrets-7176 deletion completed in 6.222790366s • [SLOW TEST:16.629 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:46:52.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4308 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4308 STEP: Creating statefulset with conflicting port in namespace statefulset-4308 STEP: Waiting until pod test-pod will start running in namespace statefulset-4308 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4308 Feb 22 13:47:04.556: INFO: Observed stateful pod in namespace: statefulset-4308, name: ss-0, uid: 1fc2932b-76bb-4531-b33d-226844bf2bb2, status phase: Pending. Waiting for statefulset controller to delete. Feb 22 13:47:06.496: INFO: Observed stateful pod in namespace: statefulset-4308, name: ss-0, uid: 1fc2932b-76bb-4531-b33d-226844bf2bb2, status phase: Failed. Waiting for statefulset controller to delete. Feb 22 13:47:06.552: INFO: Observed stateful pod in namespace: statefulset-4308, name: ss-0, uid: 1fc2932b-76bb-4531-b33d-226844bf2bb2, status phase: Failed. Waiting for statefulset controller to delete. 
Feb 22 13:47:06.562: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4308 STEP: Removing pod with conflicting port in namespace statefulset-4308 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4308 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 22 13:47:16.928: INFO: Deleting all statefulset in ns statefulset-4308 Feb 22 13:47:16.935: INFO: Scaling statefulset ss to 0 Feb 22 13:47:27.006: INFO: Waiting for statefulset status.replicas updated to 0 Feb 22 13:47:27.013: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:47:27.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4308" for this suite. Feb 22 13:47:33.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:47:33.144: INFO: namespace statefulset-4308 deletion completed in 6.104302614s • [SLOW TEST:40.790 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] 
[sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:47:33.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0222 13:47:35.227555 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 22 13:47:35.227: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:47:35.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8498" for this suite. Feb 22 13:47:41.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:47:41.874: INFO: namespace gc-8498 deletion completed in 6.644068963s • [SLOW TEST:8.728 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:47:41.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-155 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-155 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-155 Feb 22 13:47:42.035: INFO: Found 0 stateful pods, waiting for 1 Feb 22 13:47:52.044: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 22 13:47:52.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-155 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 22 13:47:52.929: INFO: stderr: "I0222 13:47:52.374078 335 log.go:172] (0xc0007d6210) (0xc0005b68c0) Create stream\nI0222 13:47:52.374155 335 log.go:172] (0xc0007d6210) (0xc0005b68c0) Stream added, broadcasting: 1\nI0222 13:47:52.383601 335 log.go:172] (0xc0007d6210) Reply frame received for 1\nI0222 13:47:52.383631 335 log.go:172] (0xc0007d6210) (0xc0006d00a0) Create stream\nI0222 13:47:52.383639 335 log.go:172] (0xc0007d6210) (0xc0006d00a0) Stream added, broadcasting: 3\nI0222 13:47:52.384984 335 log.go:172] (0xc0007d6210) Reply frame received for 3\nI0222 13:47:52.385018 335 log.go:172] (0xc0007d6210) (0xc0006d0140) Create stream\nI0222 13:47:52.385027 335 log.go:172] (0xc0007d6210) (0xc0006d0140) Stream added, broadcasting: 5\nI0222 13:47:52.386513 335 log.go:172] (0xc0007d6210) Reply frame received for 5\nI0222 13:47:52.612127 335 log.go:172] (0xc0007d6210) Data frame received for 5\nI0222 13:47:52.612259 335 log.go:172] (0xc0006d0140) (5) Data frame handling\nI0222 13:47:52.612315 335 log.go:172] (0xc0006d0140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 13:47:52.715722 335 log.go:172] (0xc0007d6210) Data frame received for 3\nI0222 13:47:52.715816 335 log.go:172] 
(0xc0006d00a0) (3) Data frame handling\nI0222 13:47:52.715886 335 log.go:172] (0xc0006d00a0) (3) Data frame sent\nI0222 13:47:52.913051 335 log.go:172] (0xc0007d6210) (0xc0006d00a0) Stream removed, broadcasting: 3\nI0222 13:47:52.913336 335 log.go:172] (0xc0007d6210) Data frame received for 1\nI0222 13:47:52.913539 335 log.go:172] (0xc0005b68c0) (1) Data frame handling\nI0222 13:47:52.913595 335 log.go:172] (0xc0005b68c0) (1) Data frame sent\nI0222 13:47:52.913656 335 log.go:172] (0xc0007d6210) (0xc0005b68c0) Stream removed, broadcasting: 1\nI0222 13:47:52.913819 335 log.go:172] (0xc0007d6210) (0xc0006d0140) Stream removed, broadcasting: 5\nI0222 13:47:52.913858 335 log.go:172] (0xc0007d6210) Go away received\nI0222 13:47:52.915071 335 log.go:172] (0xc0007d6210) (0xc0005b68c0) Stream removed, broadcasting: 1\nI0222 13:47:52.915104 335 log.go:172] (0xc0007d6210) (0xc0006d00a0) Stream removed, broadcasting: 3\nI0222 13:47:52.915126 335 log.go:172] (0xc0007d6210) (0xc0006d0140) Stream removed, broadcasting: 5\n" Feb 22 13:47:52.929: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 22 13:47:52.929: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 22 13:47:52.940: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 22 13:48:02.952: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 22 13:48:02.953: INFO: Waiting for statefulset status.replicas updated to 0 Feb 22 13:48:02.985: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999781s Feb 22 13:48:03.997: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.983221809s Feb 22 13:48:05.011: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.971438762s Feb 22 13:48:06.024: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.957832547s Feb 22 13:48:07.031: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 5.944035505s Feb 22 13:48:08.041: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.937420632s Feb 22 13:48:09.050: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.927914447s Feb 22 13:48:10.605: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.918502812s Feb 22 13:48:11.614: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.363536214s Feb 22 13:48:12.630: INFO: Verifying statefulset ss doesn't scale past 1 for another 353.907386ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-155 Feb 22 13:48:13.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-155 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 22 13:48:14.307: INFO: stderr: "I0222 13:48:13.833638 354 log.go:172] (0xc0007ec420) (0xc00068c820) Create stream\nI0222 13:48:13.833815 354 log.go:172] (0xc0007ec420) (0xc00068c820) Stream added, broadcasting: 1\nI0222 13:48:13.841077 354 log.go:172] (0xc0007ec420) Reply frame received for 1\nI0222 13:48:13.841105 354 log.go:172] (0xc0007ec420) (0xc0003a0460) Create stream\nI0222 13:48:13.841123 354 log.go:172] (0xc0007ec420) (0xc0003a0460) Stream added, broadcasting: 3\nI0222 13:48:13.843961 354 log.go:172] (0xc0007ec420) Reply frame received for 3\nI0222 13:48:13.843997 354 log.go:172] (0xc0007ec420) (0xc000934000) Create stream\nI0222 13:48:13.844008 354 log.go:172] (0xc0007ec420) (0xc000934000) Stream added, broadcasting: 5\nI0222 13:48:13.845062 354 log.go:172] (0xc0007ec420) Reply frame received for 5\nI0222 13:48:14.107648 354 log.go:172] (0xc0007ec420) Data frame received for 3\nI0222 13:48:14.107722 354 log.go:172] (0xc0003a0460) (3) Data frame handling\nI0222 13:48:14.107730 354 log.go:172] (0xc0003a0460) (3) Data frame sent\nI0222 13:48:14.107791 354 log.go:172] (0xc0007ec420) Data 
frame received for 5\nI0222 13:48:14.107824 354 log.go:172] (0xc000934000) (5) Data frame handling\nI0222 13:48:14.107836 354 log.go:172] (0xc000934000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0222 13:48:14.297833 354 log.go:172] (0xc0007ec420) Data frame received for 1\nI0222 13:48:14.298166 354 log.go:172] (0xc00068c820) (1) Data frame handling\nI0222 13:48:14.298206 354 log.go:172] (0xc00068c820) (1) Data frame sent\nI0222 13:48:14.298255 354 log.go:172] (0xc0007ec420) (0xc00068c820) Stream removed, broadcasting: 1\nI0222 13:48:14.298586 354 log.go:172] (0xc0007ec420) (0xc0003a0460) Stream removed, broadcasting: 3\nI0222 13:48:14.298656 354 log.go:172] (0xc0007ec420) (0xc000934000) Stream removed, broadcasting: 5\nI0222 13:48:14.298694 354 log.go:172] (0xc0007ec420) Go away received\nI0222 13:48:14.298854 354 log.go:172] (0xc0007ec420) (0xc00068c820) Stream removed, broadcasting: 1\nI0222 13:48:14.298923 354 log.go:172] (0xc0007ec420) (0xc0003a0460) Stream removed, broadcasting: 3\nI0222 13:48:14.298963 354 log.go:172] (0xc0007ec420) (0xc000934000) Stream removed, broadcasting: 5\n" Feb 22 13:48:14.309: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 22 13:48:14.309: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 22 13:48:14.321: INFO: Found 1 stateful pods, waiting for 3 Feb 22 13:48:24.332: INFO: Found 2 stateful pods, waiting for 3 Feb 22 13:48:34.330: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 22 13:48:34.330: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 22 13:48:34.330: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 22 13:48:44.330: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 22 13:48:44.330: INFO: Waiting for pod ss-1 
to enter Running - Ready=true, currently Running - Ready=true Feb 22 13:48:44.330: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 22 13:48:44.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-155 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 22 13:48:45.055: INFO: stderr: "I0222 13:48:44.566147 373 log.go:172] (0xc00096a370) (0xc0008d86e0) Create stream\nI0222 13:48:44.566383 373 log.go:172] (0xc00096a370) (0xc0008d86e0) Stream added, broadcasting: 1\nI0222 13:48:44.574138 373 log.go:172] (0xc00096a370) Reply frame received for 1\nI0222 13:48:44.574173 373 log.go:172] (0xc00096a370) (0xc0008d8780) Create stream\nI0222 13:48:44.574184 373 log.go:172] (0xc00096a370) (0xc0008d8780) Stream added, broadcasting: 3\nI0222 13:48:44.577026 373 log.go:172] (0xc00096a370) Reply frame received for 3\nI0222 13:48:44.577178 373 log.go:172] (0xc00096a370) (0xc0008d8820) Create stream\nI0222 13:48:44.577202 373 log.go:172] (0xc00096a370) (0xc0008d8820) Stream added, broadcasting: 5\nI0222 13:48:44.581899 373 log.go:172] (0xc00096a370) Reply frame received for 5\nI0222 13:48:44.822245 373 log.go:172] (0xc00096a370) Data frame received for 5\nI0222 13:48:44.822329 373 log.go:172] (0xc0008d8820) (5) Data frame handling\nI0222 13:48:44.822366 373 log.go:172] (0xc0008d8820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 13:48:44.822407 373 log.go:172] (0xc00096a370) Data frame received for 3\nI0222 13:48:44.822443 373 log.go:172] (0xc0008d8780) (3) Data frame handling\nI0222 13:48:44.822461 373 log.go:172] (0xc0008d8780) (3) Data frame sent\nI0222 13:48:45.041135 373 log.go:172] (0xc00096a370) Data frame received for 1\nI0222 13:48:45.041209 373 log.go:172] (0xc0008d86e0) (1) Data frame handling\nI0222 
13:48:45.041229 373 log.go:172] (0xc0008d86e0) (1) Data frame sent\nI0222 13:48:45.041271 373 log.go:172] (0xc00096a370) (0xc0008d86e0) Stream removed, broadcasting: 1\nI0222 13:48:45.041654 373 log.go:172] (0xc00096a370) (0xc0008d8780) Stream removed, broadcasting: 3\nI0222 13:48:45.041756 373 log.go:172] (0xc00096a370) (0xc0008d8820) Stream removed, broadcasting: 5\nI0222 13:48:45.041813 373 log.go:172] (0xc00096a370) Go away received\nI0222 13:48:45.042286 373 log.go:172] (0xc00096a370) (0xc0008d86e0) Stream removed, broadcasting: 1\nI0222 13:48:45.042369 373 log.go:172] (0xc00096a370) (0xc0008d8780) Stream removed, broadcasting: 3\nI0222 13:48:45.042412 373 log.go:172] (0xc00096a370) (0xc0008d8820) Stream removed, broadcasting: 5\n" Feb 22 13:48:45.056: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 22 13:48:45.056: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 22 13:48:45.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-155 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 22 13:48:45.620: INFO: stderr: "I0222 13:48:45.310805 392 log.go:172] (0xc000a94420) (0xc0004ec820) Create stream\nI0222 13:48:45.310885 392 log.go:172] (0xc000a94420) (0xc0004ec820) Stream added, broadcasting: 1\nI0222 13:48:45.324478 392 log.go:172] (0xc000a94420) Reply frame received for 1\nI0222 13:48:45.324546 392 log.go:172] (0xc000a94420) (0xc00054a140) Create stream\nI0222 13:48:45.324564 392 log.go:172] (0xc000a94420) (0xc00054a140) Stream added, broadcasting: 3\nI0222 13:48:45.325874 392 log.go:172] (0xc000a94420) Reply frame received for 3\nI0222 13:48:45.325929 392 log.go:172] (0xc000a94420) (0xc0004ec000) Create stream\nI0222 13:48:45.325945 392 log.go:172] (0xc000a94420) (0xc0004ec000) Stream added, broadcasting: 5\nI0222 13:48:45.327264 392 log.go:172] (0xc000a94420) Reply 
frame received for 5\nI0222 13:48:45.440245 392 log.go:172] (0xc000a94420) Data frame received for 5\nI0222 13:48:45.440314 392 log.go:172] (0xc0004ec000) (5) Data frame handling\nI0222 13:48:45.440345 392 log.go:172] (0xc0004ec000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 13:48:45.529751 392 log.go:172] (0xc000a94420) Data frame received for 3\nI0222 13:48:45.529793 392 log.go:172] (0xc00054a140) (3) Data frame handling\nI0222 13:48:45.529816 392 log.go:172] (0xc00054a140) (3) Data frame sent\nI0222 13:48:45.610586 392 log.go:172] (0xc000a94420) Data frame received for 1\nI0222 13:48:45.610670 392 log.go:172] (0xc000a94420) (0xc00054a140) Stream removed, broadcasting: 3\nI0222 13:48:45.610744 392 log.go:172] (0xc0004ec820) (1) Data frame handling\nI0222 13:48:45.610758 392 log.go:172] (0xc0004ec820) (1) Data frame sent\nI0222 13:48:45.610779 392 log.go:172] (0xc000a94420) (0xc0004ec000) Stream removed, broadcasting: 5\nI0222 13:48:45.610990 392 log.go:172] (0xc000a94420) (0xc0004ec820) Stream removed, broadcasting: 1\nI0222 13:48:45.611084 392 log.go:172] (0xc000a94420) Go away received\nI0222 13:48:45.611757 392 log.go:172] (0xc000a94420) (0xc0004ec820) Stream removed, broadcasting: 1\nI0222 13:48:45.611786 392 log.go:172] (0xc000a94420) (0xc00054a140) Stream removed, broadcasting: 3\nI0222 13:48:45.611803 392 log.go:172] (0xc000a94420) (0xc0004ec000) Stream removed, broadcasting: 5\n" Feb 22 13:48:45.620: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 22 13:48:45.620: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 22 13:48:45.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-155 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 22 13:48:46.058: INFO: stderr: "I0222 13:48:45.750777 412 log.go:172] (0xc00012ae70) (0xc000630780) 
Create stream\nI0222 13:48:45.750862 412 log.go:172] (0xc00012ae70) (0xc000630780) Stream added, broadcasting: 1\nI0222 13:48:45.756742 412 log.go:172] (0xc00012ae70) Reply frame received for 1\nI0222 13:48:45.756762 412 log.go:172] (0xc00012ae70) (0xc000630820) Create stream\nI0222 13:48:45.756767 412 log.go:172] (0xc00012ae70) (0xc000630820) Stream added, broadcasting: 3\nI0222 13:48:45.758040 412 log.go:172] (0xc00012ae70) Reply frame received for 3\nI0222 13:48:45.758055 412 log.go:172] (0xc00012ae70) (0xc0007a4000) Create stream\nI0222 13:48:45.758061 412 log.go:172] (0xc00012ae70) (0xc0007a4000) Stream added, broadcasting: 5\nI0222 13:48:45.759351 412 log.go:172] (0xc00012ae70) Reply frame received for 5\nI0222 13:48:45.854470 412 log.go:172] (0xc00012ae70) Data frame received for 5\nI0222 13:48:45.854497 412 log.go:172] (0xc0007a4000) (5) Data frame handling\nI0222 13:48:45.854509 412 log.go:172] (0xc0007a4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 13:48:45.881244 412 log.go:172] (0xc00012ae70) Data frame received for 3\nI0222 13:48:45.881265 412 log.go:172] (0xc000630820) (3) Data frame handling\nI0222 13:48:45.881277 412 log.go:172] (0xc000630820) (3) Data frame sent\nI0222 13:48:46.051586 412 log.go:172] (0xc00012ae70) (0xc000630820) Stream removed, broadcasting: 3\nI0222 13:48:46.051667 412 log.go:172] (0xc00012ae70) Data frame received for 1\nI0222 13:48:46.051685 412 log.go:172] (0xc000630780) (1) Data frame handling\nI0222 13:48:46.051697 412 log.go:172] (0xc000630780) (1) Data frame sent\nI0222 13:48:46.051755 412 log.go:172] (0xc00012ae70) (0xc000630780) Stream removed, broadcasting: 1\nI0222 13:48:46.051769 412 log.go:172] (0xc00012ae70) (0xc0007a4000) Stream removed, broadcasting: 5\nI0222 13:48:46.051797 412 log.go:172] (0xc00012ae70) Go away received\nI0222 13:48:46.052004 412 log.go:172] (0xc00012ae70) (0xc000630780) Stream removed, broadcasting: 1\nI0222 13:48:46.052014 412 log.go:172] (0xc00012ae70) 
(0xc000630820) Stream removed, broadcasting: 3\nI0222 13:48:46.052018 412 log.go:172] (0xc00012ae70) (0xc0007a4000) Stream removed, broadcasting: 5\n" Feb 22 13:48:46.058: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 22 13:48:46.058: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 22 13:48:46.058: INFO: Waiting for statefulset status.replicas updated to 0 Feb 22 13:48:46.071: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 22 13:48:56.102: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 22 13:48:56.102: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 22 13:48:56.102: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 22 13:48:56.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999762s Feb 22 13:48:57.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989160868s Feb 22 13:48:58.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.864153084s Feb 22 13:48:59.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.854283106s Feb 22 13:49:00.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.83211138s Feb 22 13:49:01.302: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.818118736s Feb 22 13:49:02.310: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.80540671s Feb 22 13:49:03.328: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.797509997s Feb 22 13:49:08.267: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.779627131s STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-155 Feb 22 13:49:09.285: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-155 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 22 13:49:09.844: INFO: stderr: "I0222 13:49:09.552753 431 log.go:172] (0xc000a74370) (0xc0002e0820) Create stream\nI0222 13:49:09.552984 431 log.go:172] (0xc000a74370) (0xc0002e0820) Stream added, broadcasting: 1\nI0222 13:49:09.572657 431 log.go:172] (0xc000a74370) Reply frame received for 1\nI0222 13:49:09.572750 431 log.go:172] (0xc000a74370) (0xc0005643c0) Create stream\nI0222 13:49:09.572772 431 log.go:172] (0xc000a74370) (0xc0005643c0) Stream added, broadcasting: 3\nI0222 13:49:09.574769 431 log.go:172] (0xc000a74370) Reply frame received for 3\nI0222 13:49:09.574818 431 log.go:172] (0xc000a74370) (0xc0002e0000) Create stream\nI0222 13:49:09.574845 431 log.go:172] (0xc000a74370) (0xc0002e0000) Stream added, broadcasting: 5\nI0222 13:49:09.578257 431 log.go:172] (0xc000a74370) Reply frame received for 5\nI0222 13:49:09.673440 431 log.go:172] (0xc000a74370) Data frame received for 5\nI0222 13:49:09.673791 431 log.go:172] (0xc0002e0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0222 13:49:09.673902 431 log.go:172] (0xc000a74370) Data frame received for 3\nI0222 13:49:09.674153 431 log.go:172] (0xc0005643c0) (3) Data frame handling\nI0222 13:49:09.674203 431 log.go:172] (0xc0005643c0) (3) Data frame sent\nI0222 13:49:09.674257 431 log.go:172] (0xc0002e0000) (5) Data frame sent\nI0222 13:49:09.831393 431 log.go:172] (0xc000a74370) Data frame received for 1\nI0222 13:49:09.831910 431 log.go:172] (0xc000a74370) (0xc0005643c0) Stream removed, broadcasting: 3\nI0222 13:49:09.832079 431 log.go:172] (0xc000a74370) (0xc0002e0000) Stream removed, broadcasting: 5\nI0222 13:49:09.832168 431 log.go:172] (0xc0002e0820) (1) Data frame handling\nI0222 13:49:09.832204 431 log.go:172] (0xc0002e0820) (1) Data frame sent\nI0222 13:49:09.832225 431 log.go:172] (0xc000a74370) (0xc0002e0820) Stream removed, 
broadcasting: 1\nI0222 13:49:09.832244 431 log.go:172] (0xc000a74370) Go away received\nI0222 13:49:09.833043 431 log.go:172] (0xc000a74370) (0xc0002e0820) Stream removed, broadcasting: 1\nI0222 13:49:09.833061 431 log.go:172] (0xc000a74370) (0xc0005643c0) Stream removed, broadcasting: 3\nI0222 13:49:09.833073 431 log.go:172] (0xc000a74370) (0xc0002e0000) Stream removed, broadcasting: 5\n" Feb 22 13:49:09.845: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 22 13:49:09.845: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 22 13:49:09.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-155 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 22 13:49:10.197: INFO: stderr: "I0222 13:49:10.007242 454 log.go:172] (0xc0008400b0) (0xc00076c6e0) Create stream\nI0222 13:49:10.007371 454 log.go:172] (0xc0008400b0) (0xc00076c6e0) Stream added, broadcasting: 1\nI0222 13:49:10.010739 454 log.go:172] (0xc0008400b0) Reply frame received for 1\nI0222 13:49:10.010771 454 log.go:172] (0xc0008400b0) (0xc000554280) Create stream\nI0222 13:49:10.010778 454 log.go:172] (0xc0008400b0) (0xc000554280) Stream added, broadcasting: 3\nI0222 13:49:10.011476 454 log.go:172] (0xc0008400b0) Reply frame received for 3\nI0222 13:49:10.011492 454 log.go:172] (0xc0008400b0) (0xc000554320) Create stream\nI0222 13:49:10.011497 454 log.go:172] (0xc0008400b0) (0xc000554320) Stream added, broadcasting: 5\nI0222 13:49:10.012236 454 log.go:172] (0xc0008400b0) Reply frame received for 5\nI0222 13:49:10.103684 454 log.go:172] (0xc0008400b0) Data frame received for 5\nI0222 13:49:10.103725 454 log.go:172] (0xc000554320) (5) Data frame handling\nI0222 13:49:10.103736 454 log.go:172] (0xc000554320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0222 13:49:10.103757 454 log.go:172] 
(0xc0008400b0) Data frame received for 3\nI0222 13:49:10.103761 454 log.go:172] (0xc000554280) (3) Data frame handling\nI0222 13:49:10.103771 454 log.go:172] (0xc000554280) (3) Data frame sent\nI0222 13:49:10.189173 454 log.go:172] (0xc0008400b0) (0xc000554280) Stream removed, broadcasting: 3\nI0222 13:49:10.189315 454 log.go:172] (0xc0008400b0) Data frame received for 1\nI0222 13:49:10.189354 454 log.go:172] (0xc0008400b0) (0xc000554320) Stream removed, broadcasting: 5\nI0222 13:49:10.189414 454 log.go:172] (0xc00076c6e0) (1) Data frame handling\nI0222 13:49:10.189432 454 log.go:172] (0xc00076c6e0) (1) Data frame sent\nI0222 13:49:10.189446 454 log.go:172] (0xc0008400b0) (0xc00076c6e0) Stream removed, broadcasting: 1\nI0222 13:49:10.189459 454 log.go:172] (0xc0008400b0) Go away received\nI0222 13:49:10.189944 454 log.go:172] (0xc0008400b0) (0xc00076c6e0) Stream removed, broadcasting: 1\nI0222 13:49:10.190058 454 log.go:172] (0xc0008400b0) (0xc000554280) Stream removed, broadcasting: 3\nI0222 13:49:10.190083 454 log.go:172] (0xc0008400b0) (0xc000554320) Stream removed, broadcasting: 5\n" Feb 22 13:49:10.198: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 22 13:49:10.198: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 22 13:49:10.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-155 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 22 13:49:10.900: INFO: stderr: "I0222 13:49:10.446384 470 log.go:172] (0xc000116dc0) (0xc00039a780) Create stream\nI0222 13:49:10.446587 470 log.go:172] (0xc000116dc0) (0xc00039a780) Stream added, broadcasting: 1\nI0222 13:49:10.464614 470 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0222 13:49:10.464691 470 log.go:172] (0xc000116dc0) (0xc0003efb80) Create stream\nI0222 13:49:10.464704 470 log.go:172] (0xc000116dc0) 
(0xc0003efb80) Stream added, broadcasting: 3\nI0222 13:49:10.474750 470 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0222 13:49:10.474786 470 log.go:172] (0xc000116dc0) (0xc0005e03c0) Create stream\nI0222 13:49:10.474799 470 log.go:172] (0xc000116dc0) (0xc0005e03c0) Stream added, broadcasting: 5\nI0222 13:49:10.476848 470 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0222 13:49:10.723239 470 log.go:172] (0xc000116dc0) Data frame received for 5\nI0222 13:49:10.723314 470 log.go:172] (0xc0005e03c0) (5) Data frame handling\nI0222 13:49:10.723343 470 log.go:172] (0xc0005e03c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0222 13:49:10.723415 470 log.go:172] (0xc000116dc0) Data frame received for 3\nI0222 13:49:10.723429 470 log.go:172] (0xc0003efb80) (3) Data frame handling\nI0222 13:49:10.723455 470 log.go:172] (0xc0003efb80) (3) Data frame sent\nI0222 13:49:10.885506 470 log.go:172] (0xc000116dc0) Data frame received for 1\nI0222 13:49:10.885594 470 log.go:172] (0xc00039a780) (1) Data frame handling\nI0222 13:49:10.885630 470 log.go:172] (0xc00039a780) (1) Data frame sent\nI0222 13:49:10.885651 470 log.go:172] (0xc000116dc0) (0xc00039a780) Stream removed, broadcasting: 1\nI0222 13:49:10.886059 470 log.go:172] (0xc000116dc0) (0xc0003efb80) Stream removed, broadcasting: 3\nI0222 13:49:10.886905 470 log.go:172] (0xc000116dc0) (0xc0005e03c0) Stream removed, broadcasting: 5\nI0222 13:49:10.887052 470 log.go:172] (0xc000116dc0) (0xc00039a780) Stream removed, broadcasting: 1\nI0222 13:49:10.887119 470 log.go:172] (0xc000116dc0) (0xc0003efb80) Stream removed, broadcasting: 3\nI0222 13:49:10.887162 470 log.go:172] (0xc000116dc0) (0xc0005e03c0) Stream removed, broadcasting: 5\nI0222 13:49:10.887641 470 log.go:172] (0xc000116dc0) Go away received\n" Feb 22 13:49:10.901: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 22 13:49:10.901: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true 
on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 22 13:49:10.901: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 22 13:49:51.002: INFO: Deleting all statefulset in ns statefulset-155 Feb 22 13:49:51.008: INFO: Scaling statefulset ss to 0 Feb 22 13:49:51.025: INFO: Waiting for statefulset status.replicas updated to 0 Feb 22 13:49:51.029: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:49:51.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-155" for this suite. Feb 22 13:49:57.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:49:57.199: INFO: namespace statefulset-155 deletion completed in 6.135243379s • [SLOW TEST:135.324 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:49:57.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:50:09.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3959" for this suite. 
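The Kubelet test above schedules a busybox command that always fails and then checks that the container reports a terminated reason. A minimal pod sketch under that assumption (names and command are illustrative, not the suite's generated ones; the suite builds its spec in Go):

```yaml
# Hypothetical approximation of the pod the test creates: a busybox
# container that exits non-zero, so the kubelet records a terminated
# container state with a non-empty reason (typically "Error").
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod        # illustrative name
spec:
  restartPolicy: Never       # let the container stay terminated
  containers:
  - name: always-fails
    image: busybox
    command: ["/bin/false"]  # guaranteed non-zero exit
```

The terminated reason can then be read from the pod status, e.g. via `kubectl get pod bin-false-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'`.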
Feb 22 13:50:15.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:50:15.620: INFO: namespace kubelet-test-3959 deletion completed in 6.244977442s • [SLOW TEST:18.421 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:50:15.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e1ba3c9e-14db-455d-970e-76c3885977be STEP: Creating a pod to test consume secrets Feb 22 13:50:15.757: INFO: Waiting up to 5m0s for pod "pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e" in namespace "secrets-6208" to be "success or failure" Feb 22 13:50:15.783: INFO: Pod "pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.562318ms Feb 22 13:50:17.801: INFO: Pod "pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043625405s Feb 22 13:50:19.809: INFO: Pod "pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051910142s Feb 22 13:50:21.829: INFO: Pod "pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071124866s Feb 22 13:50:23.852: INFO: Pod "pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094248244s Feb 22 13:50:25.867: INFO: Pod "pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.109754654s STEP: Saw pod success Feb 22 13:50:25.867: INFO: Pod "pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e" satisfied condition "success or failure" Feb 22 13:50:25.875: INFO: Trying to get logs from node iruya-node pod pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e container secret-volume-test: STEP: delete the pod Feb 22 13:50:25.951: INFO: Waiting for pod pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e to disappear Feb 22 13:50:25.968: INFO: Pod pod-secrets-c7630d6c-bb26-4647-b1dd-2e1e29f2229e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:50:25.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6208" for this suite. 
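The Secrets test above mounts a secret volume with `defaultMode` set and verifies the resulting file permissions from inside the pod. A hedged sketch of such a Secret and consuming pod (names and data are illustrative; the suite generates UUID-suffixed names like those in the log):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test          # illustrative; the run uses a generated name
data:
  data-1: dmFsdWUtMQ==       # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # print the octal mode of the projected file; with defaultMode 0400
    # this should report 400 (read-only for the owner)
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400      # applied to every file in the volume
```

The test's "success or failure" condition corresponds to this pod running to completion with the expected mode printed in its logs.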
Feb 22 13:50:32.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:50:32.144: INFO: namespace secrets-6208 deletion completed in 6.164476981s • [SLOW TEST:16.523 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:50:32.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 22 13:50:32.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7550' Feb 22 13:50:32.413: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed 
in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 22 13:50:32.413: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Feb 22 13:50:32.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7550' Feb 22 13:50:32.655: INFO: stderr: "" Feb 22 13:50:32.655: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:50:32.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7550" for this suite. Feb 22 13:50:38.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:50:38.769: INFO: namespace kubectl-7550 deletion completed in 6.106700624s • [SLOW TEST:6.625 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 
13:50:38.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 22 13:50:38.944: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5493,SelfLink:/api/v1/namespaces/watch-5493/configmaps/e2e-watch-test-resource-version,UID:aa5d5743-3aed-450c-bd9d-57e13d839988,ResourceVersion:25329044,Generation:0,CreationTimestamp:2020-02-22 13:50:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 22 13:50:38.944: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5493,SelfLink:/api/v1/namespaces/watch-5493/configmaps/e2e-watch-test-resource-version,UID:aa5d5743-3aed-450c-bd9d-57e13d839988,ResourceVersion:25329045,Generation:0,CreationTimestamp:2020-02-22 13:50:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:50:38.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5493" for this suite. Feb 22 13:50:45.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:50:45.165: INFO: namespace watch-5493 deletion completed in 6.206677138s • [SLOW TEST:6.396 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:50:45.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating 
all guestbook components Feb 22 13:50:45.240: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 22 13:50:45.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9638' Feb 22 13:50:45.799: INFO: stderr: "" Feb 22 13:50:45.799: INFO: stdout: "service/redis-slave created\n" Feb 22 13:50:45.800: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 22 13:50:45.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9638' Feb 22 13:50:46.408: INFO: stderr: "" Feb 22 13:50:46.409: INFO: stdout: "service/redis-master created\n" Feb 22 13:50:46.411: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 22 13:50:46.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9638' Feb 22 13:50:46.823: INFO: stderr: "" Feb 22 13:50:46.824: INFO: stdout: "service/frontend created\n" Feb 22 13:50:46.825: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 22 13:50:46.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9638' Feb 22 13:50:47.398: INFO: stderr: "" Feb 22 13:50:47.398: INFO: stdout: "deployment.apps/frontend created\n" Feb 22 13:50:47.399: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 22 13:50:47.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9638' Feb 22 13:50:47.920: INFO: stderr: "" Feb 22 13:50:47.920: INFO: stdout: "deployment.apps/redis-master created\n" Feb 22 13:50:47.921: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 22 13:50:47.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9638' Feb 22 13:50:49.453: INFO: stderr: "" Feb 22 13:50:49.454: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Feb 22 13:50:49.454: INFO: Waiting for all frontend pods to be Running. Feb 22 13:51:14.508: INFO: Waiting for frontend to serve content. Feb 22 13:51:15.059: INFO: Trying to add a new entry to the guestbook. Feb 22 13:51:15.115: INFO: Verifying that added entry can be retrieved. Feb 22 13:51:17.197: INFO: Failed to get response from guestbook. err: , response: {"data": ""} STEP: using delete to clean up resources Feb 22 13:51:22.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9638' Feb 22 13:51:24.321: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 22 13:51:24.321: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Feb 22 13:51:24.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9638' Feb 22 13:51:24.612: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Feb 22 13:51:24.613: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 22 13:51:24.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9638' Feb 22 13:51:24.757: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 22 13:51:24.757: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 22 13:51:24.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9638' Feb 22 13:51:24.867: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 22 13:51:24.867: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 22 13:51:24.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9638' Feb 22 13:51:24.989: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 22 13:51:24.989: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 22 13:51:24.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9638' Feb 22 13:51:25.120: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 22 13:51:25.120: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:51:25.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9638" for this suite. Feb 22 13:52:09.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:52:09.522: INFO: namespace kubectl-9638 deletion completed in 44.296978068s • [SLOW TEST:84.353 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:52:09.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create 
a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 22 13:52:09.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8118' Feb 22 13:52:09.847: INFO: stderr: "" Feb 22 13:52:09.847: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Feb 22 13:52:09.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8118' Feb 22 13:52:13.921: INFO: stderr: "" Feb 22 13:52:13.921: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:52:13.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8118" for this suite. 
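The `kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1` invocation above creates a bare Pod object rather than a Deployment. A minimal sketch of the manifest that generator effectively submits — the pod name and image are taken from the log; the container name and the `run=` label are assumptions based on the generator's usual behavior:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod        # name from the log
  labels:
    run: e2e-test-nginx-pod       # assumption: run-pod/v1 normally adds this label
spec:
  restartPolicy: Never            # from --restart=Never
  containers:
  - name: e2e-test-nginx-pod      # assumption: generator reuses the pod name
    image: docker.io/library/nginx:1.14-alpine
```

Because the restart policy selects a Pod generator, deletion is a single `kubectl delete pods e2e-test-nginx-pod`, as the test's AfterEach shows.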
Feb 22 13:52:20.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:52:20.285: INFO: namespace kubectl-8118 deletion completed in 6.311344254s • [SLOW TEST:10.762 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:52:20.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-9cc848c9-6d12-41bf-be92-1badad6e7b7c STEP: Creating a pod to test consume secrets Feb 22 13:52:20.399: INFO: Waiting up to 5m0s for pod "pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77" in namespace "secrets-9881" to be "success or failure" Feb 22 13:52:20.409: INFO: Pod "pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.936719ms Feb 22 13:52:22.420: INFO: Pod "pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020588774s Feb 22 13:52:24.428: INFO: Pod "pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028717156s Feb 22 13:52:26.440: INFO: Pod "pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040956541s Feb 22 13:52:28.453: INFO: Pod "pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0536885s STEP: Saw pod success Feb 22 13:52:28.453: INFO: Pod "pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77" satisfied condition "success or failure" Feb 22 13:52:28.458: INFO: Trying to get logs from node iruya-node pod pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77 container secret-env-test: STEP: delete the pod Feb 22 13:52:28.536: INFO: Waiting for pod pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77 to disappear Feb 22 13:52:28.541: INFO: Pod pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:52:28.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9881" for this suite. 
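The Secrets test above creates a secret, then a pod whose container (`secret-env-test`) reads it through an environment variable and exits, which is why the pod goes Pending → Succeeded. A hedged sketch of the pattern being exercised — only the secret and pod names appear in the log; the image, command, key name, and variable name are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-6caf79ef-e318-4092-ae26-85dadb339f77
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test          # container name from the log
    image: busybox                 # assumption: the e2e test uses a small utility image
    command: ["sh", "-c", "env"]   # assumption: dump env so the test can check the logs
    env:
    - name: SECRET_DATA            # variable name is an assumption
      valueFrom:
        secretKeyRef:
          name: secret-test-9cc848c9-6d12-41bf-be92-1badad6e7b7c
          key: data-1              # key name is an assumption
```

The "success or failure" condition in the log corresponds to the pod reaching phase `Succeeded`, after which the test reads the container's logs to verify the secret value appeared in the environment.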
Feb 22 13:52:34.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:52:34.697: INFO: namespace secrets-9881 deletion completed in 6.151057079s • [SLOW TEST:14.411 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:52:34.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 22 13:52:42.799: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-4f20e9bf-ee2c-417f-b834-227351dff2a3,GenerateName:,Namespace:events-1001,SelfLink:/api/v1/namespaces/events-1001/pods/send-events-4f20e9bf-ee2c-417f-b834-227351dff2a3,UID:e112042c-5b68-4913-a251-81dfe5197777,ResourceVersion:25329458,Generation:0,CreationTimestamp:2020-02-22 13:52:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 744154886,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-j8c7c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-j8c7c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-j8c7c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d4a2e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d4a300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:52:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:52:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:52:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 13:52:34 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-22 13:52:34 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-22 13:52:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://9a6dc2c1db0b16673e551eb11c667ca0f4c7283ea1d0000737756d9ca90f1962}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 22 13:52:44.807: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 22 13:52:46.814: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 22 13:52:46.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1001" for this suite. 
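The Go-struct dump of the retrieved pod above is hard to read. Rendered as the manifest the Events test effectively submitted — fields taken directly from the dump, server-populated defaults and status omitted — it is roughly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-4f20e9bf-ee2c-417f-b834-227351dff2a3
  namespace: events-1001
  labels:
    name: foo
    time: "744154886"
spec:
  restartPolicy: Always
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP
```

The test then lists events whose `involvedObject` references this pod and asserts it sees one from the `default-scheduler` (Scheduled) and one from the kubelet on `iruya-node` (Pulled/Created/Started), matching the "Saw scheduler event" and "Saw kubelet event" lines in the log.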
Feb 22 13:53:26.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 22 13:53:26.965: INFO: namespace events-1001 deletion completed in 40.127034236s • [SLOW TEST:52.267 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 22 13:53:26.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-9488 I0222 13:53:27.066709 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9488, replica count: 1 I0222 13:53:28.118013 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0222 13:53:29.118978 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0222 13:53:30.119538 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0222 13:53:31.120267 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0222 13:53:32.120984 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0222 13:53:33.122103 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0222 13:53:34.122862 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0222 13:53:35.123551 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0222 13:53:36.124254 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 22 13:53:36.325: INFO: Created: latency-svc-56sq8 Feb 22 13:53:36.338: INFO: Got endpoints: latency-svc-56sq8 [113.686111ms] Feb 22 13:53:36.535: INFO: Created: latency-svc-sm5ll Feb 22 13:53:36.550: INFO: Got endpoints: latency-svc-sm5ll [210.0959ms] Feb 22 13:53:36.669: INFO: Created: latency-svc-d2g5b Feb 22 13:53:36.744: INFO: Got endpoints: latency-svc-d2g5b [404.094618ms] Feb 22 13:53:36.840: INFO: Created: latency-svc-ctff2 Feb 22 13:53:36.845: INFO: Got endpoints: latency-svc-ctff2 [506.411886ms] Feb 22 13:53:36.928: INFO: Created: latency-svc-zbxww Feb 22 13:53:37.038: INFO: Got endpoints: latency-svc-zbxww [696.958608ms] Feb 22 13:53:37.058: INFO: Created: latency-svc-bw4kn Feb 22 13:53:37.069: INFO: Got endpoints: latency-svc-bw4kn [728.011991ms] Feb 22 13:53:37.119: INFO: Created: latency-svc-6dqcz Feb 22 13:53:37.123: INFO: Got endpoints: latency-svc-6dqcz [784.033083ms] Feb 
22 13:53:37.231: INFO: Created: latency-svc-mrl7j Feb 22 13:53:37.236: INFO: Got endpoints: latency-svc-mrl7j [896.202376ms] Feb 22 13:53:37.290: INFO: Created: latency-svc-497p9 Feb 22 13:53:37.302: INFO: Got endpoints: latency-svc-497p9 [961.238907ms] Feb 22 13:53:37.401: INFO: Created: latency-svc-xvddb Feb 22 13:53:37.454: INFO: Got endpoints: latency-svc-xvddb [1.112557628s] Feb 22 13:53:37.473: INFO: Created: latency-svc-grc7d Feb 22 13:53:37.479: INFO: Got endpoints: latency-svc-grc7d [1.137765336s] Feb 22 13:53:37.585: INFO: Created: latency-svc-lqvkl Feb 22 13:53:37.628: INFO: Got endpoints: latency-svc-lqvkl [1.287018034s] Feb 22 13:53:37.676: INFO: Created: latency-svc-hcfzf Feb 22 13:53:37.771: INFO: Got endpoints: latency-svc-hcfzf [1.429652136s] Feb 22 13:53:37.772: INFO: Created: latency-svc-5jqdb Feb 22 13:53:37.811: INFO: Got endpoints: latency-svc-5jqdb [1.469604639s] Feb 22 13:53:37.926: INFO: Created: latency-svc-6bmws Feb 22 13:53:37.942: INFO: Got endpoints: latency-svc-6bmws [1.601805611s] Feb 22 13:53:38.009: INFO: Created: latency-svc-tpq25 Feb 22 13:53:38.014: INFO: Got endpoints: latency-svc-tpq25 [1.674574146s] Feb 22 13:53:38.132: INFO: Created: latency-svc-4wxd9 Feb 22 13:53:38.132: INFO: Got endpoints: latency-svc-4wxd9 [1.58166527s] Feb 22 13:53:38.195: INFO: Created: latency-svc-7jjg9 Feb 22 13:53:38.266: INFO: Got endpoints: latency-svc-7jjg9 [1.521139528s] Feb 22 13:53:38.280: INFO: Created: latency-svc-gb495 Feb 22 13:53:38.293: INFO: Got endpoints: latency-svc-gb495 [1.447898144s] Feb 22 13:53:38.353: INFO: Created: latency-svc-5klw2 Feb 22 13:53:38.429: INFO: Got endpoints: latency-svc-5klw2 [1.390704453s] Feb 22 13:53:38.439: INFO: Created: latency-svc-crqbs Feb 22 13:53:38.454: INFO: Got endpoints: latency-svc-crqbs [1.385551387s] Feb 22 13:53:38.491: INFO: Created: latency-svc-lfb2s Feb 22 13:53:38.510: INFO: Got endpoints: latency-svc-lfb2s [1.386060693s] Feb 22 13:53:38.668: INFO: Created: latency-svc-x54xw Feb 22 
13:53:38.748: INFO: Got endpoints: latency-svc-x54xw [1.511912775s] Feb 22 13:53:38.823: INFO: Created: latency-svc-v8s4k Feb 22 13:53:38.857: INFO: Got endpoints: latency-svc-v8s4k [1.555181054s] Feb 22 13:53:38.905: INFO: Created: latency-svc-xq44h Feb 22 13:53:38.974: INFO: Got endpoints: latency-svc-xq44h [1.519641976s] Feb 22 13:53:39.017: INFO: Created: latency-svc-zdgmg Feb 22 13:53:39.038: INFO: Got endpoints: latency-svc-zdgmg [1.558728395s] Feb 22 13:53:39.136: INFO: Created: latency-svc-7jzj5 Feb 22 13:53:39.232: INFO: Created: latency-svc-ksw52 Feb 22 13:53:39.272: INFO: Got endpoints: latency-svc-7jzj5 [1.643611276s] Feb 22 13:53:39.373: INFO: Got endpoints: latency-svc-ksw52 [1.601639471s] Feb 22 13:53:39.438: INFO: Created: latency-svc-n562m Feb 22 13:53:39.459: INFO: Got endpoints: latency-svc-n562m [1.647678791s] Feb 22 13:53:39.567: INFO: Created: latency-svc-m9k7n Feb 22 13:53:39.576: INFO: Got endpoints: latency-svc-m9k7n [1.632568559s] Feb 22 13:53:39.621: INFO: Created: latency-svc-nxp4c Feb 22 13:53:39.708: INFO: Got endpoints: latency-svc-nxp4c [1.693440694s] Feb 22 13:53:39.764: INFO: Created: latency-svc-m8jfr Feb 22 13:53:39.788: INFO: Got endpoints: latency-svc-m8jfr [1.65561993s] Feb 22 13:53:39.878: INFO: Created: latency-svc-w996v Feb 22 13:53:39.878: INFO: Got endpoints: latency-svc-w996v [1.611927474s] Feb 22 13:53:39.947: INFO: Created: latency-svc-6krnz Feb 22 13:53:39.958: INFO: Got endpoints: latency-svc-6krnz [1.663798543s] Feb 22 13:53:40.076: INFO: Created: latency-svc-72g92 Feb 22 13:53:40.124: INFO: Got endpoints: latency-svc-72g92 [1.695224753s] Feb 22 13:53:40.204: INFO: Created: latency-svc-m5fk5 Feb 22 13:53:40.209: INFO: Got endpoints: latency-svc-m5fk5 [1.754614388s] Feb 22 13:53:40.249: INFO: Created: latency-svc-jkqsv Feb 22 13:53:40.258: INFO: Got endpoints: latency-svc-jkqsv [1.747745272s] Feb 22 13:53:40.494: INFO: Created: latency-svc-gpgw7 Feb 22 13:53:40.512: INFO: Got endpoints: latency-svc-gpgw7 
[1.763893376s] Feb 22 13:53:40.713: INFO: Created: latency-svc-6jz4f Feb 22 13:53:40.720: INFO: Got endpoints: latency-svc-6jz4f [1.861645624s] Feb 22 13:53:40.770: INFO: Created: latency-svc-8tv6f Feb 22 13:53:40.793: INFO: Got endpoints: latency-svc-8tv6f [1.818736604s] Feb 22 13:53:40.876: INFO: Created: latency-svc-ngqkc Feb 22 13:53:40.887: INFO: Got endpoints: latency-svc-ngqkc [1.84842557s] Feb 22 13:53:40.970: INFO: Created: latency-svc-whrvt Feb 22 13:53:41.039: INFO: Got endpoints: latency-svc-whrvt [1.765958764s] Feb 22 13:53:41.072: INFO: Created: latency-svc-dzmdq Feb 22 13:53:41.080: INFO: Got endpoints: latency-svc-dzmdq [1.706669248s] Feb 22 13:53:41.128: INFO: Created: latency-svc-dmhft Feb 22 13:53:41.191: INFO: Got endpoints: latency-svc-dmhft [1.73186445s] Feb 22 13:53:41.254: INFO: Created: latency-svc-568gt Feb 22 13:53:41.255: INFO: Got endpoints: latency-svc-568gt [1.678715055s] Feb 22 13:53:41.357: INFO: Created: latency-svc-4tm7g Feb 22 13:53:41.372: INFO: Got endpoints: latency-svc-4tm7g [1.663272021s] Feb 22 13:53:41.425: INFO: Created: latency-svc-wlfvs Feb 22 13:53:41.445: INFO: Got endpoints: latency-svc-wlfvs [1.656926886s] Feb 22 13:53:41.513: INFO: Created: latency-svc-f925b Feb 22 13:53:41.519: INFO: Got endpoints: latency-svc-f925b [1.640643567s] Feb 22 13:53:41.577: INFO: Created: latency-svc-mq9zd Feb 22 13:53:41.585: INFO: Got endpoints: latency-svc-mq9zd [1.627213309s] Feb 22 13:53:41.781: INFO: Created: latency-svc-k2rkf Feb 22 13:53:41.838: INFO: Got endpoints: latency-svc-k2rkf [1.713559711s] Feb 22 13:53:41.905: INFO: Created: latency-svc-n55bl Feb 22 13:53:41.915: INFO: Got endpoints: latency-svc-n55bl [1.704938954s] Feb 22 13:53:41.979: INFO: Created: latency-svc-vr4x7 Feb 22 13:53:41.985: INFO: Got endpoints: latency-svc-vr4x7 [1.727204779s] Feb 22 13:53:42.177: INFO: Created: latency-svc-5479r Feb 22 13:53:42.177: INFO: Created: latency-svc-c97w2 Feb 22 13:53:42.246: INFO: Got endpoints: latency-svc-c97w2 
[1.733637425s] Feb 22 13:53:42.246: INFO: Got endpoints: latency-svc-5479r [1.526455426s] Feb 22 13:53:42.298: INFO: Created: latency-svc-wn7tl Feb 22 13:53:42.329: INFO: Got endpoints: latency-svc-wn7tl [1.536308336s] Feb 22 13:53:42.451: INFO: Created: latency-svc-rpz4v Feb 22 13:53:42.470: INFO: Got endpoints: latency-svc-rpz4v [1.583334242s] Feb 22 13:53:42.502: INFO: Created: latency-svc-n9lfb Feb 22 13:53:42.591: INFO: Got endpoints: latency-svc-n9lfb [1.552529307s] Feb 22 13:53:42.607: INFO: Created: latency-svc-vnkrg Feb 22 13:53:42.620: INFO: Got endpoints: latency-svc-vnkrg [1.539443894s] Feb 22 13:53:42.680: INFO: Created: latency-svc-tpcmq Feb 22 13:53:42.750: INFO: Got endpoints: latency-svc-tpcmq [1.559130001s] Feb 22 13:53:42.789: INFO: Created: latency-svc-2kf9t Feb 22 13:53:42.822: INFO: Created: latency-svc-whsv6 Feb 22 13:53:42.823: INFO: Got endpoints: latency-svc-2kf9t [1.568408429s] Feb 22 13:53:42.841: INFO: Got endpoints: latency-svc-whsv6 [1.46922463s] Feb 22 13:53:42.951: INFO: Created: latency-svc-q26lr Feb 22 13:53:42.962: INFO: Got endpoints: latency-svc-q26lr [1.516085516s] Feb 22 13:53:42.992: INFO: Created: latency-svc-t6hvv Feb 22 13:53:43.028: INFO: Got endpoints: latency-svc-t6hvv [1.508621921s] Feb 22 13:53:43.100: INFO: Created: latency-svc-flztw Feb 22 13:53:43.111: INFO: Got endpoints: latency-svc-flztw [1.52591153s] Feb 22 13:53:43.163: INFO: Created: latency-svc-g8vk4 Feb 22 13:53:43.288: INFO: Got endpoints: latency-svc-g8vk4 [1.449850007s] Feb 22 13:53:43.300: INFO: Created: latency-svc-br64n Feb 22 13:53:43.326: INFO: Got endpoints: latency-svc-br64n [1.410832942s] Feb 22 13:53:43.374: INFO: Created: latency-svc-rqkf8 Feb 22 13:53:43.381: INFO: Got endpoints: latency-svc-rqkf8 [1.395309911s] Feb 22 13:53:43.487: INFO: Created: latency-svc-r66ss Feb 22 13:53:43.540: INFO: Got endpoints: latency-svc-r66ss [213.076966ms] Feb 22 13:53:43.670: INFO: Created: latency-svc-vh5vg Feb 22 13:53:43.679: INFO: Got endpoints: 
latency-svc-vh5vg [1.432902473s] Feb 22 13:53:43.740: INFO: Created: latency-svc-b59cp Feb 22 13:53:43.745: INFO: Got endpoints: latency-svc-b59cp [1.498328169s] Feb 22 13:53:43.851: INFO: Created: latency-svc-rrp95 Feb 22 13:53:43.863: INFO: Got endpoints: latency-svc-rrp95 [1.532773468s] Feb 22 13:53:43.923: INFO: Created: latency-svc-6256p Feb 22 13:53:43.962: INFO: Got endpoints: latency-svc-6256p [1.491553692s] Feb 22 13:53:44.008: INFO: Created: latency-svc-fpwr6 Feb 22 13:53:44.009: INFO: Got endpoints: latency-svc-fpwr6 [1.416894442s] Feb 22 13:53:44.056: INFO: Created: latency-svc-lhhtm Feb 22 13:53:44.111: INFO: Got endpoints: latency-svc-lhhtm [1.490596823s] Feb 22 13:53:44.142: INFO: Created: latency-svc-xftf5 Feb 22 13:53:44.152: INFO: Got endpoints: latency-svc-xftf5 [1.401088696s] Feb 22 13:53:44.209: INFO: Created: latency-svc-2mxgl Feb 22 13:53:44.257: INFO: Got endpoints: latency-svc-2mxgl [1.433988963s] Feb 22 13:53:44.299: INFO: Created: latency-svc-mlrq7 Feb 22 13:53:44.299: INFO: Got endpoints: latency-svc-mlrq7 [1.457595939s] Feb 22 13:53:44.333: INFO: Created: latency-svc-jq69n Feb 22 13:53:44.337: INFO: Got endpoints: latency-svc-jq69n [1.375260974s] Feb 22 13:53:44.419: INFO: Created: latency-svc-m5fr5 Feb 22 13:53:44.423: INFO: Got endpoints: latency-svc-m5fr5 [1.394082514s] Feb 22 13:53:44.504: INFO: Created: latency-svc-hv8f7 Feb 22 13:53:44.505: INFO: Got endpoints: latency-svc-hv8f7 [1.393193402s] Feb 22 13:53:44.722: INFO: Created: latency-svc-5qsft Feb 22 13:53:44.733: INFO: Got endpoints: latency-svc-5qsft [1.444174079s] Feb 22 13:53:44.825: INFO: Created: latency-svc-n9rtq Feb 22 13:53:44.916: INFO: Got endpoints: latency-svc-n9rtq [1.535320637s] Feb 22 13:53:44.937: INFO: Created: latency-svc-zk65p Feb 22 13:53:44.971: INFO: Got endpoints: latency-svc-zk65p [1.431357034s] Feb 22 13:53:45.014: INFO: Created: latency-svc-m5hxb Feb 22 13:53:45.156: INFO: Got endpoints: latency-svc-m5hxb [1.476442862s] Feb 22 13:53:45.172: INFO: 
Created: latency-svc-djzzz Feb 22 13:53:45.182: INFO: Got endpoints: latency-svc-djzzz [1.437186421s] Feb 22 13:53:45.233: INFO: Created: latency-svc-xnpn4 Feb 22 13:53:45.395: INFO: Got endpoints: latency-svc-xnpn4 [1.531889769s] Feb 22 13:53:45.411: INFO: Created: latency-svc-8mdvw Feb 22 13:53:45.411: INFO: Got endpoints: latency-svc-8mdvw [1.448546243s] Feb 22 13:53:45.474: INFO: Created: latency-svc-25grq Feb 22 13:53:45.474: INFO: Got endpoints: latency-svc-25grq [1.464876409s] Feb 22 13:53:45.588: INFO: Created: latency-svc-sxzzx Feb 22 13:53:45.595: INFO: Got endpoints: latency-svc-sxzzx [1.483838437s] Feb 22 13:53:45.761: INFO: Created: latency-svc-scdtl Feb 22 13:53:45.766: INFO: Got endpoints: latency-svc-scdtl [1.613767867s] Feb 22 13:53:45.815: INFO: Created: latency-svc-7hn5z Feb 22 13:53:45.819: INFO: Got endpoints: latency-svc-7hn5z [1.561845141s] Feb 22 13:53:45.936: INFO: Created: latency-svc-pkvd6 Feb 22 13:53:45.948: INFO: Got endpoints: latency-svc-pkvd6 [1.648260809s] Feb 22 13:53:45.991: INFO: Created: latency-svc-bk9sf Feb 22 13:53:46.004: INFO: Got endpoints: latency-svc-bk9sf [1.666396436s] Feb 22 13:53:46.114: INFO: Created: latency-svc-sfpcs Feb 22 13:53:46.121: INFO: Got endpoints: latency-svc-sfpcs [1.698148667s] Feb 22 13:53:46.177: INFO: Created: latency-svc-sjmgv Feb 22 13:53:46.188: INFO: Got endpoints: latency-svc-sjmgv [1.683163062s] Feb 22 13:53:46.291: INFO: Created: latency-svc-2v275 Feb 22 13:53:46.298: INFO: Got endpoints: latency-svc-2v275 [1.5654573s] Feb 22 13:53:46.336: INFO: Created: latency-svc-5llcv Feb 22 13:53:46.343: INFO: Got endpoints: latency-svc-5llcv [1.426429618s] Feb 22 13:53:46.464: INFO: Created: latency-svc-8qk2k Feb 22 13:53:46.469: INFO: Got endpoints: latency-svc-8qk2k [1.49682652s] Feb 22 13:53:46.520: INFO: Created: latency-svc-6lmjr Feb 22 13:53:46.544: INFO: Got endpoints: latency-svc-6lmjr [1.387236948s] Feb 22 13:53:46.720: INFO: Created: latency-svc-7pxcj Feb 22 13:53:46.789: INFO: Got 
endpoints: latency-svc-7pxcj [1.606284031s]
Feb 22 13:53:46.799: INFO: Created: latency-svc-rl2r6
Feb 22 13:53:46.888: INFO: Got endpoints: latency-svc-rl2r6 [1.492434347s]
Feb 22 13:53:46.907: INFO: Created: latency-svc-ldnhs
Feb 22 13:53:46.917: INFO: Got endpoints: latency-svc-ldnhs [1.505199771s]
Feb 22 13:53:46.980: INFO: Created: latency-svc-zjz2x
Feb 22 13:53:47.111: INFO: Got endpoints: latency-svc-zjz2x [1.636544148s]
Feb 22 13:53:47.140: INFO: Created: latency-svc-82gqx
Feb 22 13:53:47.166: INFO: Got endpoints: latency-svc-82gqx [1.570796071s]
Feb 22 13:53:47.350: INFO: Created: latency-svc-pz4c6
Feb 22 13:53:47.360: INFO: Created: latency-svc-lcjjm
Feb 22 13:53:47.375: INFO: Got endpoints: latency-svc-pz4c6 [1.60874868s]
Feb 22 13:53:47.378: INFO: Got endpoints: latency-svc-lcjjm [1.558206105s]
Feb 22 13:53:47.577: INFO: Created: latency-svc-qpvbr
Feb 22 13:53:47.581: INFO: Got endpoints: latency-svc-qpvbr [1.633249528s]
Feb 22 13:53:47.796: INFO: Created: latency-svc-9qrcd
Feb 22 13:53:47.811: INFO: Got endpoints: latency-svc-9qrcd [1.806924034s]
Feb 22 13:53:48.067: INFO: Created: latency-svc-qqljt
Feb 22 13:53:48.067: INFO: Got endpoints: latency-svc-qqljt [1.946469202s]
Feb 22 13:53:48.144: INFO: Created: latency-svc-8hwm7
Feb 22 13:53:48.304: INFO: Got endpoints: latency-svc-8hwm7 [2.115567243s]
Feb 22 13:53:48.341: INFO: Created: latency-svc-bgclh
Feb 22 13:53:48.348: INFO: Got endpoints: latency-svc-bgclh [2.049029107s]
Feb 22 13:53:48.558: INFO: Created: latency-svc-bcnd8
Feb 22 13:53:48.564: INFO: Got endpoints: latency-svc-bcnd8 [2.220665392s]
Feb 22 13:53:48.639: INFO: Created: latency-svc-wj5b7
Feb 22 13:53:48.853: INFO: Got endpoints: latency-svc-wj5b7 [2.383826218s]
Feb 22 13:53:48.861: INFO: Created: latency-svc-65zhd
Feb 22 13:53:48.881: INFO: Got endpoints: latency-svc-65zhd [2.336023179s]
Feb 22 13:53:48.963: INFO: Created: latency-svc-crd5m
Feb 22 13:53:49.086: INFO: Got endpoints: latency-svc-crd5m [2.296674022s]
Feb 22 13:53:49.120: INFO: Created: latency-svc-7fdmp
Feb 22 13:53:49.177: INFO: Got endpoints: latency-svc-7fdmp [2.28911676s]
Feb 22 13:53:50.768: INFO: Created: latency-svc-h69d2
Feb 22 13:53:51.114: INFO: Got endpoints: latency-svc-h69d2 [4.197278826s]
Feb 22 13:53:51.128: INFO: Created: latency-svc-9f8r2
Feb 22 13:53:51.160: INFO: Got endpoints: latency-svc-9f8r2 [4.049151326s]
Feb 22 13:53:51.352: INFO: Created: latency-svc-ktqhb
Feb 22 13:53:51.362: INFO: Got endpoints: latency-svc-ktqhb [4.195779146s]
Feb 22 13:53:51.449: INFO: Created: latency-svc-jtd9d
Feb 22 13:53:51.589: INFO: Got endpoints: latency-svc-jtd9d [4.214435077s]
Feb 22 13:53:51.633: INFO: Created: latency-svc-xtqc9
Feb 22 13:53:51.633: INFO: Got endpoints: latency-svc-xtqc9 [4.255143422s]
Feb 22 13:53:51.784: INFO: Created: latency-svc-x8lx8
Feb 22 13:53:51.874: INFO: Got endpoints: latency-svc-x8lx8 [4.292884476s]
Feb 22 13:53:51.997: INFO: Created: latency-svc-s9587
Feb 22 13:53:52.040: INFO: Created: latency-svc-88db2
Feb 22 13:53:52.060: INFO: Got endpoints: latency-svc-s9587 [4.248751054s]
Feb 22 13:53:52.183: INFO: Created: latency-svc-zdjqr
Feb 22 13:53:52.183: INFO: Got endpoints: latency-svc-88db2 [4.115307043s]
Feb 22 13:53:52.394: INFO: Got endpoints: latency-svc-zdjqr [4.089719045s]
Feb 22 13:53:52.508: INFO: Created: latency-svc-vrh76
Feb 22 13:53:52.553: INFO: Got endpoints: latency-svc-vrh76 [4.205431023s]
Feb 22 13:53:52.670: INFO: Created: latency-svc-rzzfp
Feb 22 13:53:52.756: INFO: Got endpoints: latency-svc-rzzfp [4.19175537s]
Feb 22 13:53:52.835: INFO: Created: latency-svc-xr78h
Feb 22 13:53:52.847: INFO: Got endpoints: latency-svc-xr78h [3.993376932s]
Feb 22 13:53:52.953: INFO: Created: latency-svc-lx5pb
Feb 22 13:53:52.953: INFO: Got endpoints: latency-svc-lx5pb [4.071700198s]
Feb 22 13:53:52.988: INFO: Created: latency-svc-rkgbx
Feb 22 13:53:53.027: INFO: Got endpoints: latency-svc-rkgbx [3.940066582s]
Feb 22 13:53:53.182: INFO: Created: latency-svc-tl528
Feb 22 13:53:53.196: INFO: Got endpoints: latency-svc-tl528 [4.018010036s]
Feb 22 13:53:53.253: INFO: Created: latency-svc-r5c4p
Feb 22 13:53:53.261: INFO: Got endpoints: latency-svc-r5c4p [2.1468682s]
Feb 22 13:53:53.431: INFO: Created: latency-svc-b4kr8
Feb 22 13:53:53.433: INFO: Got endpoints: latency-svc-b4kr8 [2.272860173s]
Feb 22 13:53:53.495: INFO: Created: latency-svc-45nrf
Feb 22 13:53:53.498: INFO: Got endpoints: latency-svc-45nrf [2.135224434s]
Feb 22 13:53:53.642: INFO: Created: latency-svc-tx57x
Feb 22 13:53:53.651: INFO: Got endpoints: latency-svc-tx57x [2.060718728s]
Feb 22 13:53:53.776: INFO: Created: latency-svc-ddjv7
Feb 22 13:53:53.829: INFO: Got endpoints: latency-svc-ddjv7 [2.195654283s]
Feb 22 13:53:53.829: INFO: Created: latency-svc-xqxgq
Feb 22 13:53:53.837: INFO: Got endpoints: latency-svc-xqxgq [1.962346796s]
Feb 22 13:53:53.947: INFO: Created: latency-svc-d689g
Feb 22 13:53:53.951: INFO: Got endpoints: latency-svc-d689g [1.890770776s]
Feb 22 13:53:54.003: INFO: Created: latency-svc-5dthk
Feb 22 13:53:54.021: INFO: Got endpoints: latency-svc-5dthk [1.837737928s]
Feb 22 13:53:54.079: INFO: Created: latency-svc-k9clj
Feb 22 13:53:54.090: INFO: Got endpoints: latency-svc-k9clj [1.695442115s]
Feb 22 13:53:54.143: INFO: Created: latency-svc-g9b99
Feb 22 13:53:54.147: INFO: Got endpoints: latency-svc-g9b99 [1.593070286s]
Feb 22 13:53:54.188: INFO: Created: latency-svc-s6r9j
Feb 22 13:53:54.250: INFO: Got endpoints: latency-svc-s6r9j [1.493956975s]
Feb 22 13:53:54.267: INFO: Created: latency-svc-2fk27
Feb 22 13:53:54.275: INFO: Got endpoints: latency-svc-2fk27 [1.428255705s]
Feb 22 13:53:54.318: INFO: Created: latency-svc-t5kd4
Feb 22 13:53:54.403: INFO: Created: latency-svc-7c4qb
Feb 22 13:53:54.403: INFO: Got endpoints: latency-svc-t5kd4 [1.450235943s]
Feb 22 13:53:54.422: INFO: Got endpoints: latency-svc-7c4qb [1.394416248s]
Feb 22 13:53:54.463: INFO: Created: latency-svc-v9pkf
Feb 22 13:53:54.491: INFO: Got endpoints: latency-svc-v9pkf [1.294451895s]
Feb 22 13:53:54.499: INFO: Created: latency-svc-4lpjj
Feb 22 13:53:54.588: INFO: Got endpoints: latency-svc-4lpjj [1.326855576s]
Feb 22 13:53:54.594: INFO: Created: latency-svc-znwx6
Feb 22 13:53:54.608: INFO: Got endpoints: latency-svc-znwx6 [1.17433445s]
Feb 22 13:53:54.771: INFO: Created: latency-svc-4qbv8
Feb 22 13:53:54.780: INFO: Got endpoints: latency-svc-4qbv8 [1.282444505s]
Feb 22 13:53:54.824: INFO: Created: latency-svc-pjdwq
Feb 22 13:53:54.825: INFO: Got endpoints: latency-svc-pjdwq [1.174347078s]
Feb 22 13:53:54.861: INFO: Created: latency-svc-tqv4n
Feb 22 13:53:54.865: INFO: Got endpoints: latency-svc-tqv4n [1.035372349s]
Feb 22 13:53:54.985: INFO: Created: latency-svc-9s6jz
Feb 22 13:53:54.992: INFO: Got endpoints: latency-svc-9s6jz [1.154258632s]
Feb 22 13:53:55.037: INFO: Created: latency-svc-jh5s2
Feb 22 13:53:55.041: INFO: Got endpoints: latency-svc-jh5s2 [1.089432161s]
Feb 22 13:53:55.211: INFO: Created: latency-svc-lmf8p
Feb 22 13:53:55.219: INFO: Got endpoints: latency-svc-lmf8p [1.197327688s]
Feb 22 13:53:55.327: INFO: Created: latency-svc-q2fzg
Feb 22 13:53:55.332: INFO: Got endpoints: latency-svc-q2fzg [1.242301594s]
Feb 22 13:53:55.384: INFO: Created: latency-svc-wsblt
Feb 22 13:53:55.386: INFO: Got endpoints: latency-svc-wsblt [1.239469885s]
Feb 22 13:53:55.546: INFO: Created: latency-svc-mzglp
Feb 22 13:53:55.557: INFO: Got endpoints: latency-svc-mzglp [1.30670001s]
Feb 22 13:53:55.611: INFO: Created: latency-svc-7ldbc
Feb 22 13:53:55.769: INFO: Created: latency-svc-br8lv
Feb 22 13:53:55.770: INFO: Got endpoints: latency-svc-7ldbc [1.494249364s]
Feb 22 13:53:55.798: INFO: Got endpoints: latency-svc-br8lv [1.395371753s]
Feb 22 13:53:55.863: INFO: Created: latency-svc-gbktv
Feb 22 13:53:55.911: INFO: Got endpoints: latency-svc-gbktv [1.489317719s]
Feb 22 13:53:55.925: INFO: Created: latency-svc-kwpvq
Feb 22 13:53:55.931: INFO: Got endpoints: latency-svc-kwpvq [1.439477945s]
Feb 22 13:53:55.985: INFO: Created: latency-svc-9hv97
Feb 22 13:53:56.062: INFO: Got endpoints: latency-svc-9hv97 [1.473050857s]
Feb 22 13:53:56.104: INFO: Created: latency-svc-n9xkx
Feb 22 13:53:56.110: INFO: Got endpoints: latency-svc-n9xkx [1.502236365s]
Feb 22 13:53:56.153: INFO: Created: latency-svc-qgzzs
Feb 22 13:53:56.214: INFO: Got endpoints: latency-svc-qgzzs [1.433601904s]
Feb 22 13:53:56.229: INFO: Created: latency-svc-2qp8q
Feb 22 13:53:56.242: INFO: Got endpoints: latency-svc-2qp8q [1.416504037s]
Feb 22 13:53:56.277: INFO: Created: latency-svc-5fwhv
Feb 22 13:53:56.305: INFO: Got endpoints: latency-svc-5fwhv [1.439500796s]
Feb 22 13:53:56.388: INFO: Created: latency-svc-7stk7
Feb 22 13:53:56.425: INFO: Got endpoints: latency-svc-7stk7 [1.432748153s]
Feb 22 13:53:56.432: INFO: Created: latency-svc-mzhs2
Feb 22 13:53:56.448: INFO: Got endpoints: latency-svc-mzhs2 [1.40689223s]
Feb 22 13:53:56.627: INFO: Created: latency-svc-jgktv
Feb 22 13:53:56.695: INFO: Got endpoints: latency-svc-jgktv [1.475854146s]
Feb 22 13:53:56.710: INFO: Created: latency-svc-cbwvl
Feb 22 13:53:56.791: INFO: Got endpoints: latency-svc-cbwvl [1.458512898s]
Feb 22 13:53:56.817: INFO: Created: latency-svc-s86tk
Feb 22 13:53:56.865: INFO: Got endpoints: latency-svc-s86tk [1.478579122s]
Feb 22 13:53:56.877: INFO: Created: latency-svc-mdcmg
Feb 22 13:53:56.941: INFO: Got endpoints: latency-svc-mdcmg [1.383860509s]
Feb 22 13:53:56.983: INFO: Created: latency-svc-8t8kd
Feb 22 13:53:56.985: INFO: Got endpoints: latency-svc-8t8kd [1.215222023s]
Feb 22 13:53:57.116: INFO: Created: latency-svc-4pqtw
Feb 22 13:53:57.158: INFO: Got endpoints: latency-svc-4pqtw [1.359014971s]
Feb 22 13:53:57.272: INFO: Created: latency-svc-qj6kj
Feb 22 13:53:57.285: INFO: Got endpoints: latency-svc-qj6kj [1.373168639s]
Feb 22 13:53:57.333: INFO: Created: latency-svc-pgn5r
Feb 22 13:53:57.365: INFO: Got endpoints: latency-svc-pgn5r [1.433153398s]
Feb 22 13:53:57.466: INFO: Created: latency-svc-h75cj
Feb 22 13:53:57.603: INFO: Got endpoints: latency-svc-h75cj [1.540420658s]
Feb 22 13:53:57.632: INFO: Created: latency-svc-vkt5r
Feb 22 13:53:57.635: INFO: Got endpoints: latency-svc-vkt5r [1.524665088s]
Feb 22 13:53:57.681: INFO: Created: latency-svc-27lhv
Feb 22 13:53:57.750: INFO: Got endpoints: latency-svc-27lhv [1.535680057s]
Feb 22 13:53:57.771: INFO: Created: latency-svc-tl64w
Feb 22 13:53:57.800: INFO: Got endpoints: latency-svc-tl64w [1.558199519s]
Feb 22 13:53:57.870: INFO: Created: latency-svc-8g2km
Feb 22 13:53:57.947: INFO: Got endpoints: latency-svc-8g2km [1.641836847s]
Feb 22 13:53:57.958: INFO: Created: latency-svc-7zpkf
Feb 22 13:53:57.966: INFO: Got endpoints: latency-svc-7zpkf [1.54031439s]
Feb 22 13:53:58.011: INFO: Created: latency-svc-kp9tn
Feb 22 13:53:58.093: INFO: Got endpoints: latency-svc-kp9tn [1.645340334s]
Feb 22 13:53:58.148: INFO: Created: latency-svc-gjvhp
Feb 22 13:53:58.155: INFO: Got endpoints: latency-svc-gjvhp [1.459079833s]
Feb 22 13:53:58.244: INFO: Created: latency-svc-d4q22
Feb 22 13:53:58.249: INFO: Got endpoints: latency-svc-d4q22 [1.457794679s]
Feb 22 13:53:58.293: INFO: Created: latency-svc-l49x8
Feb 22 13:53:58.308: INFO: Got endpoints: latency-svc-l49x8 [1.442529358s]
Feb 22 13:53:58.424: INFO: Created: latency-svc-r7ppq
Feb 22 13:53:58.431: INFO: Got endpoints: latency-svc-r7ppq [1.489530031s]
Feb 22 13:53:58.492: INFO: Created: latency-svc-xs6hr
Feb 22 13:53:58.500: INFO: Got endpoints: latency-svc-xs6hr [1.51532219s]
Feb 22 13:53:58.595: INFO: Created: latency-svc-zlb8z
Feb 22 13:53:58.598: INFO: Got endpoints: latency-svc-zlb8z [1.440326758s]
Feb 22 13:53:58.655: INFO: Created: latency-svc-v98sx
Feb 22 13:53:58.676: INFO: Got endpoints: latency-svc-v98sx [1.390779716s]
Feb 22 13:53:58.829: INFO: Created: latency-svc-pjrz6
Feb 22 13:53:58.838: INFO: Got endpoints: latency-svc-pjrz6 [1.473197032s]
Feb 22 13:53:58.885: INFO: Created: latency-svc-69mrs
Feb 22 13:53:58.891: INFO: Got endpoints: latency-svc-69mrs [1.288067967s]
Feb 22 13:53:59.001: INFO: Created: latency-svc-gkvvh
Feb 22 13:53:59.014: INFO: Got endpoints: latency-svc-gkvvh [1.378889345s]
Feb 22 13:53:59.079: INFO: Created: latency-svc-8hmp4
Feb 22 13:53:59.103: INFO: Got endpoints: latency-svc-8hmp4 [1.351958604s]
Feb 22 13:53:59.306: INFO: Created: latency-svc-95tb8
Feb 22 13:53:59.307: INFO: Got endpoints: latency-svc-95tb8 [1.506545003s]
Feb 22 13:53:59.345: INFO: Created: latency-svc-zbs9w
Feb 22 13:53:59.376: INFO: Got endpoints: latency-svc-zbs9w [1.427898394s]
Feb 22 13:53:59.493: INFO: Created: latency-svc-kvgjz
Feb 22 13:53:59.515: INFO: Got endpoints: latency-svc-kvgjz [1.549334352s]
Feb 22 13:53:59.546: INFO: Created: latency-svc-w59rg
Feb 22 13:53:59.592: INFO: Created: latency-svc-q7zqz
Feb 22 13:53:59.596: INFO: Got endpoints: latency-svc-w59rg [1.502561957s]
Feb 22 13:53:59.683: INFO: Got endpoints: latency-svc-q7zqz [1.528256947s]
Feb 22 13:53:59.724: INFO: Created: latency-svc-6fv2d
Feb 22 13:53:59.731: INFO: Got endpoints: latency-svc-6fv2d [1.481578894s]
Feb 22 13:53:59.942: INFO: Created: latency-svc-dvnhx
Feb 22 13:53:59.942: INFO: Got endpoints: latency-svc-dvnhx [1.633346545s]
Feb 22 13:53:59.942: INFO: Latencies: [210.0959ms 213.076966ms 404.094618ms 506.411886ms 696.958608ms 728.011991ms 784.033083ms 896.202376ms 961.238907ms 1.035372349s 1.089432161s 1.112557628s 1.137765336s 1.154258632s 1.17433445s 1.174347078s 1.197327688s 1.215222023s 1.239469885s 1.242301594s 1.282444505s 1.287018034s 1.288067967s 1.294451895s 1.30670001s 1.326855576s 1.351958604s 1.359014971s 1.373168639s 1.375260974s 1.378889345s 1.383860509s 1.385551387s 1.386060693s 1.387236948s 1.390704453s 1.390779716s 1.393193402s 1.394082514s 1.394416248s 1.395309911s 1.395371753s 1.401088696s 1.40689223s 1.410832942s 1.416504037s 1.416894442s 1.426429618s 1.427898394s 1.428255705s 1.429652136s 1.431357034s 1.432748153s 1.432902473s 1.433153398s 1.433601904s 1.433988963s 1.437186421s 1.439477945s 1.439500796s 1.440326758s 1.442529358s 1.444174079s 1.447898144s 1.448546243s 1.449850007s 1.450235943s 1.457595939s 1.457794679s 1.458512898s 1.459079833s 1.464876409s 1.46922463s 1.469604639s 1.473050857s 1.473197032s 1.475854146s 1.476442862s 1.478579122s 1.481578894s 1.483838437s 1.489317719s 1.489530031s 1.490596823s 1.491553692s 1.492434347s 1.493956975s 1.494249364s 1.49682652s 1.498328169s 1.502236365s 1.502561957s 1.505199771s 1.506545003s 1.508621921s 1.511912775s 1.51532219s 1.516085516s 1.519641976s 1.521139528s 1.524665088s 1.52591153s 1.526455426s 1.528256947s 1.531889769s 1.532773468s 1.535320637s 1.535680057s 1.536308336s 1.539443894s 1.54031439s 1.540420658s 1.549334352s 1.552529307s 1.555181054s 1.558199519s 1.558206105s 1.558728395s 1.559130001s 1.561845141s 1.5654573s 1.568408429s 1.570796071s 1.58166527s 1.583334242s 1.593070286s 1.601639471s 1.601805611s 1.606284031s 1.60874868s 1.611927474s 1.613767867s 1.627213309s 1.632568559s 1.633249528s 1.633346545s 1.636544148s 1.640643567s 1.641836847s 1.643611276s 1.645340334s 1.647678791s 1.648260809s 1.65561993s 1.656926886s 1.663272021s 1.663798543s 1.666396436s 1.674574146s 1.678715055s 1.683163062s 1.693440694s 1.695224753s 1.695442115s 1.698148667s 1.704938954s 1.706669248s 1.713559711s 1.727204779s 1.73186445s 1.733637425s 1.747745272s 1.754614388s 1.763893376s 1.765958764s 1.806924034s 1.818736604s 1.837737928s 1.84842557s 1.861645624s 1.890770776s 1.946469202s 1.962346796s 2.049029107s 2.060718728s 2.115567243s 2.135224434s 2.1468682s 2.195654283s 2.220665392s 2.272860173s 2.28911676s 2.296674022s 2.336023179s 2.383826218s 3.940066582s 3.993376932s 4.018010036s 4.049151326s 4.071700198s 4.089719045s 4.115307043s 4.19175537s 4.195779146s 4.197278826s 4.205431023s 4.214435077s 4.248751054s 4.255143422s 4.292884476s]
Feb 22 13:53:59.942: INFO: 50 %ile: 1.524665088s
Feb 22 13:53:59.942: INFO: 90 %ile: 2.272860173s
Feb 22 13:53:59.942: INFO: 99 %ile: 4.255143422s
Feb 22 13:53:59.942: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:53:59.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9488" for this suite.
Feb 22 13:54:39.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:54:40.109: INFO: namespace svc-latency-9488 deletion completed in 40.15748107s

• [SLOW TEST:73.144 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:54:40.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 13:54:40.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d" in namespace "projected-1138" to be "success or failure"
Feb 22 13:54:40.281: INFO: Pod "downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.915729ms
Feb 22 13:54:42.288: INFO: Pod "downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034627217s
Feb 22 13:54:44.296: INFO: Pod "downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041942573s
Feb 22 13:54:46.305: INFO: Pod "downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051237437s
Feb 22 13:54:48.322: INFO: Pod "downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067848352s
STEP: Saw pod success
Feb 22 13:54:48.322: INFO: Pod "downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d" satisfied condition "success or failure"
Feb 22 13:54:48.328: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d container client-container: 
STEP: delete the pod
Feb 22 13:54:48.487: INFO: Waiting for pod downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d to disappear
Feb 22 13:54:48.497: INFO: Pod downwardapi-volume-47eb8b95-60e0-4789-9eb3-a64f0eac6b0d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:54:48.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1138" for this suite.
Feb 22 13:54:56.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:54:57.208: INFO: namespace projected-1138 deletion completed in 8.702522253s

• [SLOW TEST:17.099 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:54:57.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 22 13:54:57.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3524'
Feb 22 13:54:57.612: INFO: stderr: ""
Feb 22 13:54:57.612: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 22 13:54:57.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3524'
Feb 22 13:54:57.777: INFO: stderr: ""
Feb 22 13:54:57.777: INFO: stdout: "update-demo-nautilus-f9s2m update-demo-nautilus-p6p59 "
Feb 22 13:54:57.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9s2m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:54:57.900: INFO: stderr: ""
Feb 22 13:54:57.900: INFO: stdout: ""
Feb 22 13:54:57.900: INFO: update-demo-nautilus-f9s2m is created but not running
Feb 22 13:55:02.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3524'
Feb 22 13:55:04.386: INFO: stderr: ""
Feb 22 13:55:04.386: INFO: stdout: "update-demo-nautilus-f9s2m update-demo-nautilus-p6p59 "
Feb 22 13:55:04.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9s2m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:55:05.118: INFO: stderr: ""
Feb 22 13:55:05.119: INFO: stdout: ""
Feb 22 13:55:05.119: INFO: update-demo-nautilus-f9s2m is created but not running
Feb 22 13:55:10.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3524'
Feb 22 13:55:10.284: INFO: stderr: ""
Feb 22 13:55:10.284: INFO: stdout: "update-demo-nautilus-f9s2m update-demo-nautilus-p6p59 "
Feb 22 13:55:10.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9s2m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:55:10.458: INFO: stderr: ""
Feb 22 13:55:10.458: INFO: stdout: "true"
Feb 22 13:55:10.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9s2m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:55:10.565: INFO: stderr: ""
Feb 22 13:55:10.566: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 13:55:10.566: INFO: validating pod update-demo-nautilus-f9s2m
Feb 22 13:55:10.579: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb 22 13:55:10.579: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 13:55:10.579: INFO: update-demo-nautilus-f9s2m is verified up and running
Feb 22 13:55:10.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p6p59 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:55:10.663: INFO: stderr: ""
Feb 22 13:55:10.663: INFO: stdout: "true"
Feb 22 13:55:10.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p6p59 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:55:10.740: INFO: stderr: ""
Feb 22 13:55:10.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 13:55:10.740: INFO: validating pod update-demo-nautilus-p6p59
Feb 22 13:55:10.761: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb 22 13:55:10.761: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 13:55:10.761: INFO: update-demo-nautilus-p6p59 is verified up and running
STEP: rolling-update to new replication controller
Feb 22 13:55:10.764: INFO: scanned /root for discovery docs: 
Feb 22 13:55:10.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3524'
Feb 22 13:55:43.581: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 22 13:55:43.581: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 22 13:55:43.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3524'
Feb 22 13:55:43.782: INFO: stderr: ""
Feb 22 13:55:43.782: INFO: stdout: "update-demo-kitten-8j9nz update-demo-kitten-jndfk update-demo-nautilus-p6p59 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb 22 13:55:48.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3524'
Feb 22 13:55:48.922: INFO: stderr: ""
Feb 22 13:55:48.922: INFO: stdout: "update-demo-kitten-8j9nz update-demo-kitten-jndfk "
Feb 22 13:55:48.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8j9nz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:55:49.067: INFO: stderr: ""
Feb 22 13:55:49.067: INFO: stdout: "true"
Feb 22 13:55:49.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8j9nz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:55:49.192: INFO: stderr: ""
Feb 22 13:55:49.192: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 22 13:55:49.192: INFO: validating pod update-demo-kitten-8j9nz
Feb 22 13:55:49.212: INFO: got data: {
  "image": "kitten.jpg"
}
Feb 22 13:55:49.212: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 22 13:55:49.212: INFO: update-demo-kitten-8j9nz is verified up and running
Feb 22 13:55:49.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jndfk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:55:49.288: INFO: stderr: ""
Feb 22 13:55:49.288: INFO: stdout: "true"
Feb 22 13:55:49.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jndfk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3524'
Feb 22 13:55:49.373: INFO: stderr: ""
Feb 22 13:55:49.373: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 22 13:55:49.373: INFO: validating pod update-demo-kitten-jndfk
Feb 22 13:55:49.391: INFO: got data: {
  "image": "kitten.jpg"
}
Feb 22 13:55:49.392: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 22 13:55:49.392: INFO: update-demo-kitten-jndfk is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:55:49.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3524" for this suite.
Feb 22 13:56:13.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:56:13.552: INFO: namespace kubectl-3524 deletion completed in 24.155911628s

• [SLOW TEST:76.344 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:56:13.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 13:56:13.718: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 24.514456ms)
Feb 22 13:56:13.725: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.674196ms)
Feb 22 13:56:13.733: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.512065ms)
Feb 22 13:56:13.739: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.984844ms)
Feb 22 13:56:13.744: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.910527ms)
Feb 22 13:56:13.749: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.077277ms)
Feb 22 13:56:13.761: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 12.216523ms)
Feb 22 13:56:13.768: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.014256ms)
Feb 22 13:56:13.777: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.875249ms)
Feb 22 13:56:13.788: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.245201ms)
Feb 22 13:56:13.833: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 45.216236ms)
Feb 22 13:56:13.850: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 16.773295ms)
Feb 22 13:56:13.866: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 15.778ms)
Feb 22 13:56:13.879: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.227549ms)
Feb 22 13:56:13.887: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.975037ms)
Feb 22 13:56:13.896: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.970207ms)
Feb 22 13:56:13.905: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.929162ms)
Feb 22 13:56:13.913: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.967383ms)
Feb 22 13:56:13.922: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 9.124648ms)
Feb 22 13:56:13.932: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 9.314442ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:56:13.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2961" for this suite.
Feb 22 13:56:19.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:56:20.059: INFO: namespace proxy-2961 deletion completed in 6.117922825s

• [SLOW TEST:6.506 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:56:20.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 13:56:20.224: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94" in namespace "projected-6707" to be "success or failure"
Feb 22 13:56:20.284: INFO: Pod "downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94": Phase="Pending", Reason="", readiness=false. Elapsed: 59.418405ms
Feb 22 13:56:22.294: INFO: Pod "downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069128144s
Feb 22 13:56:24.313: INFO: Pod "downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088242068s
Feb 22 13:56:26.322: INFO: Pod "downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096862801s
Feb 22 13:56:28.344: INFO: Pod "downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119502794s
Feb 22 13:56:30.353: INFO: Pod "downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.128336389s
STEP: Saw pod success
Feb 22 13:56:30.353: INFO: Pod "downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94" satisfied condition "success or failure"
Feb 22 13:56:30.361: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94 container client-container: 
STEP: delete the pod
Feb 22 13:56:31.075: INFO: Waiting for pod downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94 to disappear
Feb 22 13:56:31.103: INFO: Pod downwardapi-volume-9bbed413-70bc-4efe-8e83-7f4f16e6bf94 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:56:31.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6707" for this suite.
Feb 22 13:56:39.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:56:39.282: INFO: namespace projected-6707 deletion completed in 8.156551615s

• [SLOW TEST:19.221 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:56:39.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0222 13:57:03.725916       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 22 13:57:03.726: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:57:03.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1625" for this suite.
Feb 22 13:57:15.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:57:15.978: INFO: namespace gc-1625 deletion completed in 12.237859405s

• [SLOW TEST:36.695 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:57:15.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-1688/secret-test-e5e7d53f-7cb4-4c8b-9b69-fbd45946b2b0
STEP: Creating a pod to test consume secrets
Feb 22 13:57:18.155: INFO: Waiting up to 5m0s for pod "pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b" in namespace "secrets-1688" to be "success or failure"
Feb 22 13:57:19.281: INFO: Pod "pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 1.125917238s
Feb 22 13:57:23.202: INFO: Pod "pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.047040409s
Feb 22 13:57:25.212: INFO: Pod "pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.056694186s
Feb 22 13:57:27.220: INFO: Pod "pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.064371559s
Feb 22 13:57:29.229: INFO: Pod "pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.074013466s
Feb 22 13:57:33.616: INFO: Pod "pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.461091326s
Feb 22 13:57:35.632: INFO: Pod "pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.476868302s
STEP: Saw pod success
Feb 22 13:57:35.632: INFO: Pod "pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b" satisfied condition "success or failure"
Feb 22 13:57:35.639: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b container env-test: 
STEP: delete the pod
Feb 22 13:57:35.713: INFO: Waiting for pod pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b to disappear
Feb 22 13:57:35.719: INFO: Pod pod-configmaps-f861e026-ea07-49ec-9225-0d5a945d4f2b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:57:35.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1688" for this suite.
Feb 22 13:57:41.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:57:41.916: INFO: namespace secrets-1688 deletion completed in 6.189689603s

• [SLOW TEST:25.939 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:57:41.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 22 13:57:42.057: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:58:06.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4179" for this suite.
Feb 22 13:58:12.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:58:12.760: INFO: namespace pods-4179 deletion completed in 6.202119633s

• [SLOW TEST:30.842 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:58:12.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 13:58:12.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438" in namespace "projected-9119" to be "success or failure"
Feb 22 13:58:12.856: INFO: Pod "downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438": Phase="Pending", Reason="", readiness=false. Elapsed: 15.857809ms
Feb 22 13:58:14.869: INFO: Pod "downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028231944s
Feb 22 13:58:16.883: INFO: Pod "downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04263136s
Feb 22 13:58:19.210: INFO: Pod "downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438": Phase="Pending", Reason="", readiness=false. Elapsed: 6.3694124s
Feb 22 13:58:21.228: INFO: Pod "downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438": Phase="Pending", Reason="", readiness=false. Elapsed: 8.387773859s
Feb 22 13:58:24.449: INFO: Pod "downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.60883532s
STEP: Saw pod success
Feb 22 13:58:24.450: INFO: Pod "downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438" satisfied condition "success or failure"
Feb 22 13:58:24.495: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438 container client-container: 
STEP: delete the pod
Feb 22 13:58:25.991: INFO: Waiting for pod downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438 to disappear
Feb 22 13:58:26.031: INFO: Pod downwardapi-volume-75f18699-b6ee-48e8-ae41-cb26f4e4e438 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:58:26.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9119" for this suite.
Feb 22 13:58:32.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:58:32.219: INFO: namespace projected-9119 deletion completed in 6.175782495s

• [SLOW TEST:19.459 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:58:32.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-1dcf2062-06b9-4c18-a777-50a7f2f2c77e
STEP: Creating a pod to test consume secrets
Feb 22 13:58:32.339: INFO: Waiting up to 5m0s for pod "pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320" in namespace "secrets-3413" to be "success or failure"
Feb 22 13:58:32.357: INFO: Pod "pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320": Phase="Pending", Reason="", readiness=false. Elapsed: 17.811819ms
Feb 22 13:58:34.366: INFO: Pod "pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026527163s
Feb 22 13:58:36.382: INFO: Pod "pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042995911s
Feb 22 13:58:38.391: INFO: Pod "pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051477195s
Feb 22 13:58:40.403: INFO: Pod "pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063531443s
Feb 22 13:58:42.411: INFO: Pod "pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071889877s
STEP: Saw pod success
Feb 22 13:58:42.411: INFO: Pod "pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320" satisfied condition "success or failure"
Feb 22 13:58:42.417: INFO: Trying to get logs from node iruya-node pod pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320 container secret-volume-test: 
STEP: delete the pod
Feb 22 13:58:42.592: INFO: Waiting for pod pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320 to disappear
Feb 22 13:58:42.645: INFO: Pod pod-secrets-7881e4cf-4ba2-475d-b423-d4f564acc320 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:58:42.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3413" for this suite.
Feb 22 13:58:48.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:58:48.911: INFO: namespace secrets-3413 deletion completed in 6.255615574s

• [SLOW TEST:16.692 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:58:48.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 13:58:49.081: INFO: Create a RollingUpdate DaemonSet
Feb 22 13:58:49.086: INFO: Check that daemon pods launch on every node of the cluster
Feb 22 13:58:49.100: INFO: Number of nodes with available pods: 0
Feb 22 13:58:49.100: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:58:50.837: INFO: Number of nodes with available pods: 0
Feb 22 13:58:50.838: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:58:51.565: INFO: Number of nodes with available pods: 0
Feb 22 13:58:51.565: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:58:52.656: INFO: Number of nodes with available pods: 0
Feb 22 13:58:52.656: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:58:53.233: INFO: Number of nodes with available pods: 0
Feb 22 13:58:53.233: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:58:54.120: INFO: Number of nodes with available pods: 0
Feb 22 13:58:54.120: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:58:55.115: INFO: Number of nodes with available pods: 0
Feb 22 13:58:55.115: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:58:57.189: INFO: Number of nodes with available pods: 0
Feb 22 13:58:57.189: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:58:58.121: INFO: Number of nodes with available pods: 0
Feb 22 13:58:58.121: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:58:59.110: INFO: Number of nodes with available pods: 0
Feb 22 13:58:59.110: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:59:00.167: INFO: Number of nodes with available pods: 1
Feb 22 13:59:00.167: INFO: Node iruya-node is running more than one daemon pod
Feb 22 13:59:01.115: INFO: Number of nodes with available pods: 2
Feb 22 13:59:01.115: INFO: Number of running nodes: 2, number of available pods: 2
Feb 22 13:59:01.115: INFO: Update the DaemonSet to trigger a rollout
Feb 22 13:59:01.131: INFO: Updating DaemonSet daemon-set
Feb 22 13:59:08.177: INFO: Roll back the DaemonSet before rollout is complete
Feb 22 13:59:08.198: INFO: Updating DaemonSet daemon-set
Feb 22 13:59:08.198: INFO: Make sure DaemonSet rollback is complete
Feb 22 13:59:08.206: INFO: Wrong image for pod: daemon-set-tlw8l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 22 13:59:08.206: INFO: Pod daemon-set-tlw8l is not available
Feb 22 13:59:09.220: INFO: Wrong image for pod: daemon-set-tlw8l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 22 13:59:09.220: INFO: Pod daemon-set-tlw8l is not available
Feb 22 13:59:10.228: INFO: Wrong image for pod: daemon-set-tlw8l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 22 13:59:10.228: INFO: Pod daemon-set-tlw8l is not available
Feb 22 13:59:11.217: INFO: Wrong image for pod: daemon-set-tlw8l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 22 13:59:11.217: INFO: Pod daemon-set-tlw8l is not available
Feb 22 13:59:12.221: INFO: Wrong image for pod: daemon-set-tlw8l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 22 13:59:12.221: INFO: Pod daemon-set-tlw8l is not available
Feb 22 13:59:13.932: INFO: Wrong image for pod: daemon-set-tlw8l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 22 13:59:13.933: INFO: Pod daemon-set-tlw8l is not available
Feb 22 13:59:14.227: INFO: Wrong image for pod: daemon-set-tlw8l. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 22 13:59:14.227: INFO: Pod daemon-set-tlw8l is not available
Feb 22 13:59:15.223: INFO: Pod daemon-set-dnqt9 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2751, will wait for the garbage collector to delete the pods
Feb 22 13:59:15.311: INFO: Deleting DaemonSet.extensions daemon-set took: 14.864874ms
Feb 22 13:59:15.912: INFO: Terminating DaemonSet.extensions daemon-set pods took: 601.111065ms
Feb 22 13:59:26.622: INFO: Number of nodes with available pods: 0
Feb 22 13:59:26.622: INFO: Number of running nodes: 0, number of available pods: 0
Feb 22 13:59:26.628: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2751/daemonsets","resourceVersion":"25331901"},"items":null}

Feb 22 13:59:26.651: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2751/pods","resourceVersion":"25331901"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:59:26.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2751" for this suite.
Feb 22 13:59:32.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:59:32.842: INFO: namespace daemonsets-2751 deletion completed in 6.158108032s

• [SLOW TEST:43.929 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:59:32.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 13:59:32.999: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b" in namespace "downward-api-758" to be "success or failure"
Feb 22 13:59:33.026: INFO: Pod "downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.20241ms
Feb 22 13:59:35.039: INFO: Pod "downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038913734s
Feb 22 13:59:37.049: INFO: Pod "downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049083791s
Feb 22 13:59:39.062: INFO: Pod "downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062118368s
Feb 22 13:59:41.070: INFO: Pod "downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070801809s
Feb 22 13:59:43.083: INFO: Pod "downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082974432s
STEP: Saw pod success
Feb 22 13:59:43.083: INFO: Pod "downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b" satisfied condition "success or failure"
Feb 22 13:59:43.090: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b container client-container: 
STEP: delete the pod
Feb 22 13:59:43.225: INFO: Waiting for pod downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b to disappear
Feb 22 13:59:43.231: INFO: Pod downwardapi-volume-c05e125a-366b-4245-bb15-7dcfbb0a694b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:59:43.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-758" for this suite.
Feb 22 13:59:49.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:59:49.406: INFO: namespace downward-api-758 deletion completed in 6.171011087s

• [SLOW TEST:16.564 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:59:49.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb 22 13:59:49.513: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 13:59:49.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3505" for this suite.
Feb 22 13:59:55.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 13:59:55.799: INFO: namespace kubectl-3505 deletion completed in 6.174801716s

• [SLOW TEST:6.391 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 13:59:55.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 22 13:59:55.947: INFO: Waiting up to 5m0s for pod "client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11" in namespace "containers-4587" to be "success or failure"
Feb 22 13:59:55.969: INFO: Pod "client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11": Phase="Pending", Reason="", readiness=false. Elapsed: 22.179813ms
Feb 22 13:59:57.977: INFO: Pod "client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02970007s
Feb 22 13:59:59.991: INFO: Pod "client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04399479s
Feb 22 14:00:02.000: INFO: Pod "client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053385061s
Feb 22 14:00:04.011: INFO: Pod "client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06405448s
Feb 22 14:00:06.023: INFO: Pod "client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076197403s
STEP: Saw pod success
Feb 22 14:00:06.023: INFO: Pod "client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11" satisfied condition "success or failure"
Feb 22 14:00:06.030: INFO: Trying to get logs from node iruya-node pod client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11 container test-container: 
STEP: delete the pod
Feb 22 14:00:06.149: INFO: Waiting for pod client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11 to disappear
Feb 22 14:00:06.155: INFO: Pod client-containers-690f6f0a-1a0b-4996-a0d5-4567de5a9e11 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:00:06.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4587" for this suite.
Feb 22 14:00:12.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:00:12.344: INFO: namespace containers-4587 deletion completed in 6.179316149s

• [SLOW TEST:16.544 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:00:12.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 22 14:00:12.489: INFO: Waiting up to 5m0s for pod "pod-840a557f-f007-486b-b954-d37026e6a966" in namespace "emptydir-6538" to be "success or failure"
Feb 22 14:00:12.501: INFO: Pod "pod-840a557f-f007-486b-b954-d37026e6a966": Phase="Pending", Reason="", readiness=false. Elapsed: 11.82931ms
Feb 22 14:00:14.514: INFO: Pod "pod-840a557f-f007-486b-b954-d37026e6a966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024750658s
Feb 22 14:00:16.529: INFO: Pod "pod-840a557f-f007-486b-b954-d37026e6a966": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039034801s
Feb 22 14:00:18.543: INFO: Pod "pod-840a557f-f007-486b-b954-d37026e6a966": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053228234s
Feb 22 14:00:20.559: INFO: Pod "pod-840a557f-f007-486b-b954-d37026e6a966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069093277s
STEP: Saw pod success
Feb 22 14:00:20.559: INFO: Pod "pod-840a557f-f007-486b-b954-d37026e6a966" satisfied condition "success or failure"
Feb 22 14:00:20.566: INFO: Trying to get logs from node iruya-node pod pod-840a557f-f007-486b-b954-d37026e6a966 container test-container: 
STEP: delete the pod
Feb 22 14:00:20.725: INFO: Waiting for pod pod-840a557f-f007-486b-b954-d37026e6a966 to disappear
Feb 22 14:00:20.732: INFO: Pod pod-840a557f-f007-486b-b954-d37026e6a966 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:00:20.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6538" for this suite.
Feb 22 14:00:26.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:00:26.898: INFO: namespace emptydir-6538 deletion completed in 6.159295454s

• [SLOW TEST:14.553 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:00:26.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 14:00:27.046: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.339094ms)
Feb 22 14:00:27.057: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.871384ms)
Feb 22 14:00:27.063: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.244652ms)
Feb 22 14:00:27.071: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.729946ms)
Feb 22 14:00:27.112: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 40.224826ms)
Feb 22 14:00:27.117: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.611318ms)
Feb 22 14:00:27.122: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.232817ms)
Feb 22 14:00:27.130: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.091609ms)
Feb 22 14:00:27.139: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.113321ms)
Feb 22 14:00:27.156: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.788626ms)
Feb 22 14:00:27.172: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.923723ms)
Feb 22 14:00:27.183: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.763356ms)
Feb 22 14:00:27.191: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.747553ms)
Feb 22 14:00:27.196: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.472685ms)
Feb 22 14:00:27.202: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.86677ms)
Feb 22 14:00:27.211: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.047464ms)
Feb 22 14:00:27.237: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.188754ms)
Feb 22 14:00:27.243: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.171944ms)
Feb 22 14:00:27.249: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.38372ms)
Feb 22 14:00:27.256: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.550008ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:00:27.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-902" for this suite.
Feb 22 14:00:33.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:00:33.512: INFO: namespace proxy-902 deletion completed in 6.251635978s

• [SLOW TEST:6.613 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:00:33.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 14:00:33.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:00:42.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4227" for this suite.
Feb 22 14:01:24.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:01:24.239: INFO: namespace pods-4227 deletion completed in 42.152342744s

• [SLOW TEST:50.726 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:01:24.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-20deb539-18eb-4161-bb92-b06b224531d6
STEP: Creating a pod to test consume secrets
Feb 22 14:01:24.356: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a" in namespace "projected-2830" to be "success or failure"
Feb 22 14:01:24.372: INFO: Pod "pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.898492ms
Feb 22 14:01:26.381: INFO: Pod "pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024318474s
Feb 22 14:01:28.391: INFO: Pod "pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034100886s
Feb 22 14:01:30.404: INFO: Pod "pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047334983s
Feb 22 14:01:32.412: INFO: Pod "pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055755411s
STEP: Saw pod success
Feb 22 14:01:32.413: INFO: Pod "pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a" satisfied condition "success or failure"
Feb 22 14:01:32.415: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a container projected-secret-volume-test: 
STEP: delete the pod
Feb 22 14:01:32.461: INFO: Waiting for pod pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a to disappear
Feb 22 14:01:32.475: INFO: Pod pod-projected-secrets-ca8f8ffd-adc0-4936-a82a-7d811bb1433a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:01:32.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2830" for this suite.
Feb 22 14:01:38.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:01:38.655: INFO: namespace projected-2830 deletion completed in 6.175645377s

• [SLOW TEST:14.416 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:01:38.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-d94ce173-96ab-459e-a104-f658ec5fc849
STEP: Creating a pod to test consume secrets
Feb 22 14:01:38.738: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6" in namespace "projected-4135" to be "success or failure"
Feb 22 14:01:38.758: INFO: Pod "pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.866072ms
Feb 22 14:01:40.779: INFO: Pod "pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040702107s
Feb 22 14:01:42.788: INFO: Pod "pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049401652s
Feb 22 14:01:44.802: INFO: Pod "pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063484958s
Feb 22 14:01:46.809: INFO: Pod "pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071005706s
STEP: Saw pod success
Feb 22 14:01:46.810: INFO: Pod "pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6" satisfied condition "success or failure"
Feb 22 14:01:46.814: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6 container projected-secret-volume-test: 
STEP: delete the pod
Feb 22 14:01:49.540: INFO: Waiting for pod pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6 to disappear
Feb 22 14:01:49.556: INFO: Pod pod-projected-secrets-e885ef05-90fa-4ad9-a599-10cf22ef9cf6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:01:49.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4135" for this suite.
Feb 22 14:01:55.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:01:55.922: INFO: namespace projected-4135 deletion completed in 6.353551926s

• [SLOW TEST:17.266 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:01:55.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-789300b8-6fbc-41b5-9fd0-f91e97d024da
STEP: Creating a pod to test consume secrets
Feb 22 14:01:56.053: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503" in namespace "projected-1072" to be "success or failure"
Feb 22 14:01:56.063: INFO: Pod "pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503": Phase="Pending", Reason="", readiness=false. Elapsed: 9.819622ms
Feb 22 14:01:58.072: INFO: Pod "pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019241835s
Feb 22 14:02:00.088: INFO: Pod "pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034783935s
Feb 22 14:02:02.099: INFO: Pod "pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045921139s
Feb 22 14:02:04.106: INFO: Pod "pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05267463s
Feb 22 14:02:06.118: INFO: Pod "pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064732634s
STEP: Saw pod success
Feb 22 14:02:06.118: INFO: Pod "pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503" satisfied condition "success or failure"
Feb 22 14:02:06.124: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503 container projected-secret-volume-test: 
STEP: delete the pod
Feb 22 14:02:06.174: INFO: Waiting for pod pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503 to disappear
Feb 22 14:02:06.183: INFO: Pod pod-projected-secrets-c2c1567a-cc93-45ed-bdfd-be659e6be503 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:02:06.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1072" for this suite.
Feb 22 14:02:12.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:02:12.337: INFO: namespace projected-1072 deletion completed in 6.143812169s

• [SLOW TEST:16.415 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:02:12.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 14:02:12.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 22 14:02:12.614: INFO: stderr: ""
Feb 22 14:02:12.614: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:02:12.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-135" for this suite.
Feb 22 14:02:18.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:02:18.790: INFO: namespace kubectl-135 deletion completed in 6.161692003s

• [SLOW TEST:6.452 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:02:18.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 22 14:02:27.003: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 22 14:02:37.178: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:02:37.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3026" for this suite.
Feb 22 14:02:43.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:02:43.384: INFO: namespace pods-3026 deletion completed in 6.192914556s

• [SLOW TEST:24.594 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:02:43.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 22 14:02:43.453: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:02:56.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-974" for this suite.
Feb 22 14:03:02.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:03:02.410: INFO: namespace init-container-974 deletion completed in 6.301383713s

• [SLOW TEST:19.026 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:03:02.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:03:12.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4015" for this suite.
Feb 22 14:03:58.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:03:58.730: INFO: namespace kubelet-test-4015 deletion completed in 46.107852056s

• [SLOW TEST:56.319 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:03:58.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-15cc73e8-c92f-4c5b-8386-3be3f92fc4fe in namespace container-probe-7306
Feb 22 14:04:06.853: INFO: Started pod liveness-15cc73e8-c92f-4c5b-8386-3be3f92fc4fe in namespace container-probe-7306
STEP: checking the pod's current state and verifying that restartCount is present
Feb 22 14:04:06.866: INFO: Initial restart count of pod liveness-15cc73e8-c92f-4c5b-8386-3be3f92fc4fe is 0
Feb 22 14:04:29.050: INFO: Restart count of pod container-probe-7306/liveness-15cc73e8-c92f-4c5b-8386-3be3f92fc4fe is now 1 (22.183889316s elapsed)
Feb 22 14:04:51.305: INFO: Restart count of pod container-probe-7306/liveness-15cc73e8-c92f-4c5b-8386-3be3f92fc4fe is now 2 (44.439434819s elapsed)
Feb 22 14:05:11.456: INFO: Restart count of pod container-probe-7306/liveness-15cc73e8-c92f-4c5b-8386-3be3f92fc4fe is now 3 (1m4.590485001s elapsed)
Feb 22 14:05:31.557: INFO: Restart count of pod container-probe-7306/liveness-15cc73e8-c92f-4c5b-8386-3be3f92fc4fe is now 4 (1m24.691346659s elapsed)
Feb 22 14:06:32.290: INFO: Restart count of pod container-probe-7306/liveness-15cc73e8-c92f-4c5b-8386-3be3f92fc4fe is now 5 (2m25.424035537s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:06:32.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7306" for this suite.
Feb 22 14:06:38.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:06:38.598: INFO: namespace container-probe-7306 deletion completed in 6.235237372s

• [SLOW TEST:159.867 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:06:38.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-0cc423c0-82ca-48f1-b554-85c2b375f61f
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:06:38.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6447" for this suite.
Feb 22 14:06:44.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:06:44.961: INFO: namespace secrets-6447 deletion completed in 6.221591218s

• [SLOW TEST:6.362 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:06:44.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0a5c60ef-a237-4de9-9bef-1d7e682ffa74
STEP: Creating a pod to test consume configMaps
Feb 22 14:06:45.086: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277" in namespace "projected-9999" to be "success or failure"
Feb 22 14:06:45.101: INFO: Pod "pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277": Phase="Pending", Reason="", readiness=false. Elapsed: 14.101274ms
Feb 22 14:06:47.128: INFO: Pod "pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041453722s
Feb 22 14:06:49.145: INFO: Pod "pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058689212s
Feb 22 14:06:51.153: INFO: Pod "pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066749998s
Feb 22 14:06:53.162: INFO: Pod "pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075803054s
Feb 22 14:06:55.173: INFO: Pod "pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086801543s
STEP: Saw pod success
Feb 22 14:06:55.173: INFO: Pod "pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277" satisfied condition "success or failure"
Feb 22 14:06:55.177: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 22 14:06:55.288: INFO: Waiting for pod pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277 to disappear
Feb 22 14:06:55.303: INFO: Pod pod-projected-configmaps-d29a8c45-8f0b-4781-8d15-32bd4de7d277 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:06:55.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9999" for this suite.
Feb 22 14:07:01.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:07:01.556: INFO: namespace projected-9999 deletion completed in 6.247760317s

• [SLOW TEST:16.595 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:07:01.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 22 14:07:01.670: INFO: Waiting up to 5m0s for pod "pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927" in namespace "emptydir-7682" to be "success or failure"
Feb 22 14:07:01.674: INFO: Pod "pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927": Phase="Pending", Reason="", readiness=false. Elapsed: 3.942981ms
Feb 22 14:07:03.719: INFO: Pod "pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04933501s
Feb 22 14:07:05.726: INFO: Pod "pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056356581s
Feb 22 14:07:08.099: INFO: Pod "pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429238483s
Feb 22 14:07:10.129: INFO: Pod "pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.459015123s
STEP: Saw pod success
Feb 22 14:07:10.129: INFO: Pod "pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927" satisfied condition "success or failure"
Feb 22 14:07:10.145: INFO: Trying to get logs from node iruya-node pod pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927 container test-container: 
STEP: delete the pod
Feb 22 14:07:10.215: INFO: Waiting for pod pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927 to disappear
Feb 22 14:07:10.223: INFO: Pod pod-fe5bc6a0-b63f-4f59-ba0f-9e41ebe56927 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:07:10.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7682" for this suite.
Feb 22 14:07:16.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:07:16.574: INFO: namespace emptydir-7682 deletion completed in 6.339678434s

• [SLOW TEST:15.018 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:07:16.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 14:07:16.666: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725" in namespace "projected-9440" to be "success or failure"
Feb 22 14:07:16.683: INFO: Pod "downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725": Phase="Pending", Reason="", readiness=false. Elapsed: 17.27755ms
Feb 22 14:07:18.692: INFO: Pod "downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026151279s
Feb 22 14:07:20.700: INFO: Pod "downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033426375s
Feb 22 14:07:22.714: INFO: Pod "downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047739854s
Feb 22 14:07:24.728: INFO: Pod "downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062172354s
Feb 22 14:07:26.739: INFO: Pod "downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072968605s
STEP: Saw pod success
Feb 22 14:07:26.739: INFO: Pod "downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725" satisfied condition "success or failure"
Feb 22 14:07:26.744: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725 container client-container: 
STEP: delete the pod
Feb 22 14:07:26.866: INFO: Waiting for pod downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725 to disappear
Feb 22 14:07:26.877: INFO: Pod downwardapi-volume-1bd3e50f-0c0c-4b0d-a2f1-bc9cbfdd2725 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:07:26.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9440" for this suite.
Feb 22 14:07:32.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:07:33.035: INFO: namespace projected-9440 deletion completed in 6.149343309s

• [SLOW TEST:16.460 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:07:33.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 14:07:33.167: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 22 14:07:37.328: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:07:37.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8618" for this suite.
Feb 22 14:07:45.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:07:45.652: INFO: namespace replication-controller-8618 deletion completed in 8.164277995s

• [SLOW TEST:12.616 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:07:45.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 22 14:07:57.878: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:07:57.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3366" for this suite.
Feb 22 14:08:04.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:08:04.271: INFO: namespace container-runtime-3366 deletion completed in 6.14913698s

• [SLOW TEST:18.619 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:08:04.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 14:08:04.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9" in namespace "downward-api-3681" to be "success or failure"
Feb 22 14:08:04.447: INFO: Pod "downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114924ms
Feb 22 14:08:06.460: INFO: Pod "downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021058579s
Feb 22 14:08:08.472: INFO: Pod "downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0331808s
Feb 22 14:08:10.496: INFO: Pod "downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057112561s
Feb 22 14:08:12.523: INFO: Pod "downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083389375s
STEP: Saw pod success
Feb 22 14:08:12.523: INFO: Pod "downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9" satisfied condition "success or failure"
Feb 22 14:08:12.538: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9 container client-container: 
STEP: delete the pod
Feb 22 14:08:12.619: INFO: Waiting for pod downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9 to disappear
Feb 22 14:08:12.678: INFO: Pod downwardapi-volume-2fbd4316-9d2e-4a23-85bc-9d47112dafd9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:08:12.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3681" for this suite.
Feb 22 14:08:18.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:08:18.808: INFO: namespace downward-api-3681 deletion completed in 6.119937348s

• [SLOW TEST:14.536 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:08:18.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-647
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 22 14:08:19.002: INFO: Found 0 stateful pods, waiting for 3
Feb 22 14:08:29.020: INFO: Found 2 stateful pods, waiting for 3
Feb 22 14:08:39.012: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 14:08:39.012: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 14:08:39.012: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 22 14:08:49.011: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 14:08:49.011: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 14:08:49.011: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 14:08:49.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-647 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 22 14:08:51.634: INFO: stderr: "I0222 14:08:51.202956    1205 log.go:172] (0xc00012abb0) (0xc000702640) Create stream\nI0222 14:08:51.203055    1205 log.go:172] (0xc00012abb0) (0xc000702640) Stream added, broadcasting: 1\nI0222 14:08:51.208913    1205 log.go:172] (0xc00012abb0) Reply frame received for 1\nI0222 14:08:51.208962    1205 log.go:172] (0xc00012abb0) (0xc0005b0320) Create stream\nI0222 14:08:51.208975    1205 log.go:172] (0xc00012abb0) (0xc0005b0320) Stream added, broadcasting: 3\nI0222 14:08:51.211154    1205 log.go:172] (0xc00012abb0) Reply frame received for 3\nI0222 14:08:51.211199    1205 log.go:172] (0xc00012abb0) (0xc000350000) Create stream\nI0222 14:08:51.211213    1205 log.go:172] (0xc00012abb0) (0xc000350000) Stream added, broadcasting: 5\nI0222 14:08:51.213382    1205 log.go:172] (0xc00012abb0) Reply frame received for 5\nI0222 14:08:51.425358    1205 log.go:172] (0xc00012abb0) Data frame received for 5\nI0222 14:08:51.425397    1205 log.go:172] (0xc000350000) (5) Data frame handling\nI0222 14:08:51.425418    1205 log.go:172] (0xc000350000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 14:08:51.504606    1205 log.go:172] (0xc00012abb0) Data frame received for 3\nI0222 14:08:51.504637    1205 log.go:172] (0xc0005b0320) (3) Data frame handling\nI0222 14:08:51.504657    1205 log.go:172] (0xc0005b0320) (3) Data frame sent\nI0222 14:08:51.624779    1205 log.go:172] (0xc00012abb0) Data frame received for 1\nI0222 14:08:51.624893    1205 log.go:172] (0xc00012abb0) (0xc000350000) Stream removed, broadcasting: 5\nI0222 14:08:51.624953    1205 log.go:172] (0xc000702640) (1) Data frame handling\nI0222 14:08:51.624977    1205 log.go:172] (0xc000702640) (1) Data frame sent\nI0222 14:08:51.625213    1205 log.go:172] (0xc00012abb0) (0xc0005b0320) Stream removed, broadcasting: 3\nI0222 14:08:51.625349    1205 log.go:172] (0xc00012abb0) (0xc000702640) Stream removed, broadcasting: 1\nI0222 14:08:51.625424    1205 log.go:172] (0xc00012abb0) Go away received\nI0222 14:08:51.626179    1205 log.go:172] (0xc00012abb0) (0xc000702640) Stream removed, broadcasting: 1\nI0222 14:08:51.626194    1205 log.go:172] (0xc00012abb0) (0xc0005b0320) Stream removed, broadcasting: 3\nI0222 14:08:51.626219    1205 log.go:172] (0xc00012abb0) (0xc000350000) Stream removed, broadcasting: 5\n"
Feb 22 14:08:51.635: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 22 14:08:51.635: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 22 14:09:01.714: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 22 14:09:11.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-647 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:09:12.203: INFO: stderr: "I0222 14:09:11.998565    1232 log.go:172] (0xc0007cc0b0) (0xc000828140) Create stream\nI0222 14:09:11.998840    1232 log.go:172] (0xc0007cc0b0) (0xc000828140) Stream added, broadcasting: 1\nI0222 14:09:12.001687    1232 log.go:172] (0xc0007cc0b0) Reply frame received for 1\nI0222 14:09:12.001718    1232 log.go:172] (0xc0007cc0b0) (0xc0005c6320) Create stream\nI0222 14:09:12.001728    1232 log.go:172] (0xc0007cc0b0) (0xc0005c6320) Stream added, broadcasting: 3\nI0222 14:09:12.003314    1232 log.go:172] (0xc0007cc0b0) Reply frame received for 3\nI0222 14:09:12.003339    1232 log.go:172] (0xc0007cc0b0) (0xc000828280) Create stream\nI0222 14:09:12.003345    1232 log.go:172] (0xc0007cc0b0) (0xc000828280) Stream added, broadcasting: 5\nI0222 14:09:12.004349    1232 log.go:172] (0xc0007cc0b0) Reply frame received for 5\nI0222 14:09:12.089932    1232 log.go:172] (0xc0007cc0b0) Data frame received for 3\nI0222 14:09:12.089994    1232 log.go:172] (0xc0005c6320) (3) Data frame handling\nI0222 14:09:12.090008    1232 log.go:172] (0xc0005c6320) (3) Data frame sent\nI0222 14:09:12.090043    1232 log.go:172] (0xc0007cc0b0) Data frame received for 5\nI0222 14:09:12.090050    1232 log.go:172] (0xc000828280) (5) Data frame handling\nI0222 14:09:12.090061    1232 log.go:172] (0xc000828280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0222 14:09:12.194296    1232 log.go:172] (0xc0007cc0b0) Data frame received for 1\nI0222 14:09:12.194434    1232 log.go:172] (0xc000828140) (1) Data frame handling\nI0222 14:09:12.194467    1232 log.go:172] (0xc000828140) (1) Data frame sent\nI0222 14:09:12.194478    1232 log.go:172] (0xc0007cc0b0) (0xc000828140) Stream removed, broadcasting: 1\nI0222 14:09:12.194862    1232 log.go:172] (0xc0007cc0b0) (0xc0005c6320) Stream removed, broadcasting: 3\nI0222 14:09:12.194905    1232 log.go:172] (0xc0007cc0b0) (0xc000828280) Stream removed, broadcasting: 5\nI0222 14:09:12.194938    1232 log.go:172] (0xc0007cc0b0) Go away received\nI0222 14:09:12.195374    1232 log.go:172] (0xc0007cc0b0) (0xc000828140) Stream removed, broadcasting: 1\nI0222 14:09:12.195386    1232 log.go:172] (0xc0007cc0b0) (0xc0005c6320) Stream removed, broadcasting: 3\nI0222 14:09:12.195390    1232 log.go:172] (0xc0007cc0b0) (0xc000828280) Stream removed, broadcasting: 5\n"
Feb 22 14:09:12.204: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 22 14:09:12.204: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 22 14:09:22.233: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
Feb 22 14:09:22.233: INFO: Waiting for Pod statefulset-647/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 14:09:22.233: INFO: Waiting for Pod statefulset-647/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 14:09:22.233: INFO: Waiting for Pod statefulset-647/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 14:09:32.244: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
Feb 22 14:09:32.244: INFO: Waiting for Pod statefulset-647/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 14:09:32.244: INFO: Waiting for Pod statefulset-647/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 14:09:42.294: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
Feb 22 14:09:42.294: INFO: Waiting for Pod statefulset-647/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 14:09:42.294: INFO: Waiting for Pod statefulset-647/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 14:09:52.255: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
Feb 22 14:09:52.255: INFO: Waiting for Pod statefulset-647/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 22 14:10:02.252: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
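The wait loop above is driven by the StatefulSet's update strategy: with the `RollingUpdate` strategy the controller replaces pods one at a time in reverse ordinal order, and the test polls until every pod reports the new update revision. A minimal sketch of the relevant spec fields (the image tag matches the one used in this run; the selector labels and service name are illustrative, not read from the log):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test            # headless service created by the test harness
  selector:
    matchLabels:
      app: ss2                 # hypothetical label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0             # pods with ordinal >= partition are updated; 0 updates all
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
```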
STEP: Rolling back to a previous revision
Feb 22 14:10:12.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-647 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 22 14:10:12.763: INFO: stderr: "I0222 14:10:12.429627    1249 log.go:172] (0xc000a38370) (0xc000940640) Create stream\nI0222 14:10:12.429731    1249 log.go:172] (0xc000a38370) (0xc000940640) Stream added, broadcasting: 1\nI0222 14:10:12.432747    1249 log.go:172] (0xc000a38370) Reply frame received for 1\nI0222 14:10:12.432828    1249 log.go:172] (0xc000a38370) (0xc0009a2000) Create stream\nI0222 14:10:12.432842    1249 log.go:172] (0xc000a38370) (0xc0009a2000) Stream added, broadcasting: 3\nI0222 14:10:12.433763    1249 log.go:172] (0xc000a38370) Reply frame received for 3\nI0222 14:10:12.433802    1249 log.go:172] (0xc000a38370) (0xc00067e320) Create stream\nI0222 14:10:12.433833    1249 log.go:172] (0xc000a38370) (0xc00067e320) Stream added, broadcasting: 5\nI0222 14:10:12.434880    1249 log.go:172] (0xc000a38370) Reply frame received for 5\nI0222 14:10:12.595699    1249 log.go:172] (0xc000a38370) Data frame received for 5\nI0222 14:10:12.595752    1249 log.go:172] (0xc00067e320) (5) Data frame handling\nI0222 14:10:12.595762    1249 log.go:172] (0xc00067e320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 14:10:12.647432    1249 log.go:172] (0xc000a38370) Data frame received for 3\nI0222 14:10:12.647459    1249 log.go:172] (0xc0009a2000) (3) Data frame handling\nI0222 14:10:12.647472    1249 log.go:172] (0xc0009a2000) (3) Data frame sent\nI0222 14:10:12.753123    1249 log.go:172] (0xc000a38370) (0xc0009a2000) Stream removed, broadcasting: 3\nI0222 14:10:12.753221    1249 log.go:172] (0xc000a38370) Data frame received for 1\nI0222 14:10:12.753245    1249 log.go:172] (0xc000a38370) (0xc00067e320) Stream removed, broadcasting: 5\nI0222 14:10:12.753283    1249 log.go:172] (0xc000940640) (1) Data frame handling\nI0222 14:10:12.753306    1249 log.go:172] (0xc000940640) (1) Data frame sent\nI0222 14:10:12.753320    1249 log.go:172] (0xc000a38370) (0xc000940640) Stream removed, broadcasting: 1\nI0222 14:10:12.753352    1249 log.go:172] 
(0xc000a38370) Go away received\nI0222 14:10:12.754107    1249 log.go:172] (0xc000a38370) (0xc000940640) Stream removed, broadcasting: 1\nI0222 14:10:12.754133    1249 log.go:172] (0xc000a38370) (0xc0009a2000) Stream removed, broadcasting: 3\nI0222 14:10:12.754140    1249 log.go:172] (0xc000a38370) (0xc00067e320) Stream removed, broadcasting: 5\n"
Feb 22 14:10:12.763: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 22 14:10:12.763: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 22 14:10:22.888: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 22 14:10:32.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-647 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:10:33.259: INFO: stderr: "I0222 14:10:33.098998    1270 log.go:172] (0xc00013a0b0) (0xc0008600a0) Create stream\nI0222 14:10:33.099082    1270 log.go:172] (0xc00013a0b0) (0xc0008600a0) Stream added, broadcasting: 1\nI0222 14:10:33.101323    1270 log.go:172] (0xc00013a0b0) Reply frame received for 1\nI0222 14:10:33.101364    1270 log.go:172] (0xc00013a0b0) (0xc0008f4000) Create stream\nI0222 14:10:33.101379    1270 log.go:172] (0xc00013a0b0) (0xc0008f4000) Stream added, broadcasting: 3\nI0222 14:10:33.102691    1270 log.go:172] (0xc00013a0b0) Reply frame received for 3\nI0222 14:10:33.102715    1270 log.go:172] (0xc00013a0b0) (0xc00067e1e0) Create stream\nI0222 14:10:33.102728    1270 log.go:172] (0xc00013a0b0) (0xc00067e1e0) Stream added, broadcasting: 5\nI0222 14:10:33.103880    1270 log.go:172] (0xc00013a0b0) Reply frame received for 5\nI0222 14:10:33.186466    1270 log.go:172] (0xc00013a0b0) Data frame received for 5\nI0222 14:10:33.186735    1270 log.go:172] (0xc00067e1e0) (5) Data frame handling\nI0222 14:10:33.186755    1270 log.go:172] (0xc00067e1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0222 14:10:33.187671    1270 log.go:172] (0xc00013a0b0) Data frame received for 3\nI0222 14:10:33.187680    1270 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0222 14:10:33.187691    1270 log.go:172] (0xc0008f4000) (3) Data frame sent\nI0222 14:10:33.251256    1270 log.go:172] (0xc00013a0b0) (0xc0008f4000) Stream removed, broadcasting: 3\nI0222 14:10:33.251689    1270 log.go:172] (0xc00013a0b0) Data frame received for 1\nI0222 14:10:33.251824    1270 log.go:172] (0xc00013a0b0) (0xc00067e1e0) Stream removed, broadcasting: 5\nI0222 14:10:33.251866    1270 log.go:172] (0xc0008600a0) (1) Data frame handling\nI0222 14:10:33.251882    1270 log.go:172] (0xc0008600a0) (1) Data frame sent\nI0222 14:10:33.251891    1270 log.go:172] (0xc00013a0b0) (0xc0008600a0) Stream removed, broadcasting: 1\nI0222 14:10:33.251900    1270 log.go:172] 
(0xc00013a0b0) Go away received\nI0222 14:10:33.252709    1270 log.go:172] (0xc00013a0b0) (0xc0008600a0) Stream removed, broadcasting: 1\nI0222 14:10:33.252751    1270 log.go:172] (0xc00013a0b0) (0xc0008f4000) Stream removed, broadcasting: 3\nI0222 14:10:33.252771    1270 log.go:172] (0xc00013a0b0) (0xc00067e1e0) Stream removed, broadcasting: 5\n"
Feb 22 14:10:33.260: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 22 14:10:33.260: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
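The `mv` commands above are how the test pauses and resumes the rollout: moving `index.html` out of the nginx web root makes the pod's readiness probe fail, which stops the StatefulSet controller from proceeding to the next ordinal, and moving the file back restores readiness. A sketch of the kind of probe this trick relies on (hypothetical values, not captured from this run):

```yaml
readinessProbe:
  httpGet:
    path: /index.html   # fails with 404 once the file is moved to /tmp
    port: 80
  periodSeconds: 1
  failureThreshold: 1
```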

Feb 22 14:10:43.312: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
Feb 22 14:10:43.313: INFO: Waiting for Pod statefulset-647/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 22 14:10:43.313: INFO: Waiting for Pod statefulset-647/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 22 14:10:53.331: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
Feb 22 14:10:53.331: INFO: Waiting for Pod statefulset-647/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 22 14:10:53.331: INFO: Waiting for Pod statefulset-647/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 22 14:11:03.339: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
Feb 22 14:11:03.340: INFO: Waiting for Pod statefulset-647/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 22 14:11:13.341: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
Feb 22 14:11:13.341: INFO: Waiting for Pod statefulset-647/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 22 14:11:23.329: INFO: Waiting for StatefulSet statefulset-647/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 22 14:11:33.356: INFO: Deleting all statefulset in ns statefulset-647
Feb 22 14:11:33.365: INFO: Scaling statefulset ss2 to 0
Feb 22 14:12:03.538: INFO: Waiting for statefulset status.replicas updated to 0
Feb 22 14:12:03.546: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:12:03.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-647" for this suite.
Feb 22 14:12:11.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:12:11.760: INFO: namespace statefulset-647 deletion completed in 8.183843841s

• [SLOW TEST:232.952 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:12:11.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5767
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5767 to expose endpoints map[]
Feb 22 14:12:12.437: INFO: successfully validated that service endpoint-test2 in namespace services-5767 exposes endpoints map[] (419.726628ms elapsed)
STEP: Creating pod pod1 in namespace services-5767
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5767 to expose endpoints map[pod1:[80]]
Feb 22 14:12:17.158: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.093397208s elapsed, will retry)
Feb 22 14:12:21.248: INFO: successfully validated that service endpoint-test2 in namespace services-5767 exposes endpoints map[pod1:[80]] (8.183543894s elapsed)
STEP: Creating pod pod2 in namespace services-5767
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5767 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 22 14:12:26.301: INFO: Unexpected endpoints: found map[72df9476-f546-44ab-a968-5ac55992ed74:[80]], expected map[pod1:[80] pod2:[80]] (5.04748541s elapsed, will retry)
Feb 22 14:12:31.971: INFO: successfully validated that service endpoint-test2 in namespace services-5767 exposes endpoints map[pod1:[80] pod2:[80]] (10.717389251s elapsed)
STEP: Deleting pod pod1 in namespace services-5767
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5767 to expose endpoints map[pod2:[80]]
Feb 22 14:12:34.774: INFO: successfully validated that service endpoint-test2 in namespace services-5767 exposes endpoints map[pod2:[80]] (2.792087671s elapsed)
STEP: Deleting pod pod2 in namespace services-5767
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5767 to expose endpoints map[]
Feb 22 14:12:34.912: INFO: successfully validated that service endpoint-test2 in namespace services-5767 exposes endpoints map[] (78.321169ms elapsed)
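The endpoint maps validated above are produced by the Service's label selector: as pods matching the selector become Ready their addresses are added to the Service's Endpoints object, and they are removed again when the pods are deleted, which is why the map goes from `map[]` to `map[pod1:[80]]` to `map[pod1:[80] pod2:[80]]` and back. A minimal sketch of the Service involved (the name follows the log; the selector label and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test2   # hypothetical label carried by pod1 and pod2
  ports:
  - port: 80
    targetPort: 80
```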
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:12:35.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5767" for this suite.
Feb 22 14:12:58.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:12:58.847: INFO: namespace services-5767 deletion completed in 23.592728544s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:47.086 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:12:58.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-219bb3fa-88a5-4813-a5a9-c94b5fca5d8d
STEP: Creating a pod to test consume configMaps
Feb 22 14:12:58.988: INFO: Waiting up to 5m0s for pod "pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c" in namespace "configmap-6078" to be "success or failure"
Feb 22 14:12:59.012: INFO: Pod "pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.01863ms
Feb 22 14:13:01.026: INFO: Pod "pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038111265s
Feb 22 14:13:03.038: INFO: Pod "pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049966831s
Feb 22 14:13:05.049: INFO: Pod "pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061206772s
Feb 22 14:13:07.111: INFO: Pod "pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123447617s
Feb 22 14:13:10.055: INFO: Pod "pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.067530629s
STEP: Saw pod success
Feb 22 14:13:10.056: INFO: Pod "pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c" satisfied condition "success or failure"
Feb 22 14:13:10.075: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c container configmap-volume-test: 
STEP: delete the pod
Feb 22 14:13:10.165: INFO: Waiting for pod pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c to disappear
Feb 22 14:13:10.175: INFO: Pod pod-configmaps-9d28821e-a293-4d5b-a551-938830421c2c no longer exists
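The test above mounts a single ConfigMap into two different volumes of the same pod and reads both copies. A sketch of the pod shape being exercised (the ConfigMap name follows the log; the mount paths, key, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume-1/data", "/etc/configmap-volume-2/data"]
    volumeMounts:
    - name: cm-vol-1
      mountPath: /etc/configmap-volume-1
    - name: cm-vol-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: cm-vol-1
    configMap:
      name: configmap-test-volume-219bb3fa-88a5-4813-a5a9-c94b5fca5d8d
  - name: cm-vol-2
    configMap:
      name: configmap-test-volume-219bb3fa-88a5-4813-a5a9-c94b5fca5d8d
```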
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:13:10.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6078" for this suite.
Feb 22 14:13:16.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:13:16.357: INFO: namespace configmap-6078 deletion completed in 6.177089087s

• [SLOW TEST:17.508 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:13:16.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-5b983a95-f697-4c28-841b-8ee8a966194e
STEP: Creating a pod to test consume secrets
Feb 22 14:13:16.445: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9" in namespace "projected-886" to be "success or failure"
Feb 22 14:13:16.471: INFO: Pod "pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9": Phase="Pending", Reason="", readiness=false. Elapsed: 25.697547ms
Feb 22 14:13:18.486: INFO: Pod "pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041406107s
Feb 22 14:13:21.414: INFO: Pod "pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.968929816s
Feb 22 14:13:23.425: INFO: Pod "pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.980218241s
Feb 22 14:13:25.434: INFO: Pod "pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.988778557s
Feb 22 14:13:27.443: INFO: Pod "pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.998418849s
STEP: Saw pod success
Feb 22 14:13:27.443: INFO: Pod "pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9" satisfied condition "success or failure"
Feb 22 14:13:27.447: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9 container projected-secret-volume-test: 
STEP: delete the pod
Feb 22 14:13:27.543: INFO: Waiting for pod pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9 to disappear
Feb 22 14:13:27.548: INFO: Pod pod-projected-secrets-91204251-2909-433a-8580-27b42d5881c9 no longer exists
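"Mappings and Item Mode" in the spec name refers to the `items` list of a projected secret source: each entry remaps a secret key to a chosen file path and sets a per-file mode (hence the `[LinuxOnly]` tag). A sketch of the volume definition being tested (the secret name follows the log; the key, path, and mode are hypothetical):

```yaml
volumes:
- name: projected-secret-volume
  projected:
    sources:
    - secret:
        name: projected-secret-test-map-5b983a95-f697-4c28-841b-8ee8a966194e
        items:
        - key: data-1              # hypothetical secret key
          path: new-path-data-1    # filename the key is mapped to inside the volume
          mode: 0400               # per-item file mode
```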
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:13:27.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-886" for this suite.
Feb 22 14:13:33.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:13:33.741: INFO: namespace projected-886 deletion completed in 6.187853228s

• [SLOW TEST:17.383 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:13:33.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 22 14:13:40.099: INFO: 0 pods remaining
Feb 22 14:13:40.099: INFO: 0 pods has nil DeletionTimestamp
Feb 22 14:13:40.099: INFO: 
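The behavior under test here, keeping the ReplicationController around until all of its pods are deleted, corresponds to foreground cascading deletion: a finalizer blocks removal of the owner until its dependents are gone. A sketch of the generic DeleteOptions body that selects it (this is the standard request shape, not captured from this run):

```yaml
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # owner is removed only after all dependents are deleted
```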
STEP: Gathering metrics
W0222 14:13:40.940752       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 22 14:13:40.940: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:13:40.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4602" for this suite.
Feb 22 14:13:54.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:13:56.339: INFO: namespace gc-4602 deletion completed in 15.3937416s

• [SLOW TEST:22.598 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:13:56.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 22 14:13:56.447: INFO: PodSpec: initContainers in spec.initContainers
Feb 22 14:15:03.257: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6ea25ae5-961e-4575-a3f2-86d82c050fef", GenerateName:"", Namespace:"init-container-864", SelfLink:"/api/v1/namespaces/init-container-864/pods/pod-init-6ea25ae5-961e-4575-a3f2-86d82c050fef", UID:"cb942941-5b62-4c61-b8c3-71fe82d36cf0", ResourceVersion:"25334268", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717977636, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"447766464"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gvhdd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002c9d0c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gvhdd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gvhdd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gvhdd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f8e738), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc00273a900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f8e7c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f8e7e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f8e7e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f8e7ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717977636, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717977636, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717977636, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717977636, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00255eda0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025aa620)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025aa690)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://ae2a11478f835fd862de8a827bf15efe6715f44b443bde07e8f3c7472388fa52"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00255ede0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00255edc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:15:03.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-864" for this suite.
Feb 22 14:15:27.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:15:27.550: INFO: namespace init-container-864 deletion completed in 24.22420441s

• [SLOW TEST:91.208 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
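The pod dump above belongs to the InitContainer spec: `init1` fails repeatedly (RestartCount:3), so `init2` stays Waiting and the app container `run1` never starts. A minimal sketch of that pod shape, reconstructed from the container names and images in the dump (the pod name and exact commands are illustrative):

```yaml
# Sketch only: init1 exits non-zero, blocking init2 and run1 on a
# RestartPolicy: Always pod, which is exactly what this spec asserts.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]   # illustrative failing command
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
```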
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:15:27.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 22 14:15:27.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9051 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 22 14:15:38.193: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0222 14:15:36.312322    1288 log.go:172] (0xc00089e2c0) (0xc0006cc780) Create stream\nI0222 14:15:36.312595    1288 log.go:172] (0xc00089e2c0) (0xc0006cc780) Stream added, broadcasting: 1\nI0222 14:15:36.337276    1288 log.go:172] (0xc00089e2c0) Reply frame received for 1\nI0222 14:15:36.337367    1288 log.go:172] (0xc00089e2c0) (0xc0006cc280) Create stream\nI0222 14:15:36.337385    1288 log.go:172] (0xc00089e2c0) (0xc0006cc280) Stream added, broadcasting: 3\nI0222 14:15:36.339178    1288 log.go:172] (0xc00089e2c0) Reply frame received for 3\nI0222 14:15:36.339229    1288 log.go:172] (0xc00089e2c0) (0xc0002a4000) Create stream\nI0222 14:15:36.339240    1288 log.go:172] (0xc00089e2c0) (0xc0002a4000) Stream added, broadcasting: 5\nI0222 14:15:36.341158    1288 log.go:172] (0xc00089e2c0) Reply frame received for 5\nI0222 14:15:36.341269    1288 log.go:172] (0xc00089e2c0) (0xc0006cc320) Create stream\nI0222 14:15:36.341282    1288 log.go:172] (0xc00089e2c0) (0xc0006cc320) Stream added, broadcasting: 7\nI0222 14:15:36.345051    1288 log.go:172] (0xc00089e2c0) Reply frame received for 7\nI0222 14:15:36.345421    1288 log.go:172] (0xc0006cc280) (3) Writing data frame\nI0222 14:15:36.345612    1288 log.go:172] (0xc0006cc280) (3) Writing data frame\nI0222 14:15:36.359152    1288 log.go:172] (0xc00089e2c0) Data frame received for 5\nI0222 14:15:36.359169    1288 log.go:172] (0xc0002a4000) (5) Data frame handling\nI0222 14:15:36.359182    1288 log.go:172] (0xc0002a4000) (5) Data frame sent\nI0222 14:15:36.366360    1288 log.go:172] (0xc00089e2c0) Data frame received for 5\nI0222 14:15:36.366386    1288 log.go:172] (0xc0002a4000) (5) Data frame handling\nI0222 14:15:36.366412    1288 log.go:172] (0xc0002a4000) (5) Data frame 
sent\nI0222 14:15:38.160408    1288 log.go:172] (0xc00089e2c0) Data frame received for 1\nI0222 14:15:38.160620    1288 log.go:172] (0xc00089e2c0) (0xc0006cc280) Stream removed, broadcasting: 3\nI0222 14:15:38.160736    1288 log.go:172] (0xc0006cc780) (1) Data frame handling\nI0222 14:15:38.160749    1288 log.go:172] (0xc0006cc780) (1) Data frame sent\nI0222 14:15:38.160757    1288 log.go:172] (0xc00089e2c0) (0xc0006cc780) Stream removed, broadcasting: 1\nI0222 14:15:38.161631    1288 log.go:172] (0xc00089e2c0) (0xc0002a4000) Stream removed, broadcasting: 5\nI0222 14:15:38.161774    1288 log.go:172] (0xc00089e2c0) (0xc0006cc320) Stream removed, broadcasting: 7\nI0222 14:15:38.161805    1288 log.go:172] (0xc00089e2c0) Go away received\nI0222 14:15:38.161859    1288 log.go:172] (0xc00089e2c0) (0xc0006cc780) Stream removed, broadcasting: 1\nI0222 14:15:38.161880    1288 log.go:172] (0xc00089e2c0) (0xc0006cc280) Stream removed, broadcasting: 3\nI0222 14:15:38.161888    1288 log.go:172] (0xc00089e2c0) (0xc0002a4000) Stream removed, broadcasting: 5\nI0222 14:15:38.161978    1288 log.go:172] (0xc00089e2c0) (0xc0006cc320) Stream removed, broadcasting: 7\n"
Feb 22 14:15:38.193: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:15:40.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9051" for this suite.
Feb 22 14:15:46.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:15:46.533: INFO: namespace kubectl-9051 deletion completed in 6.315965041s

• [SLOW TEST:18.984 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
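The stderr above warns that `kubectl run --generator=job/v1` is deprecated. A non-deprecated way to express the same workload is a Job manifest, sketched below (the attach/stdin round-trip the test performs is not reproduced here; names mirror the log):

```yaml
# Sketch of the Job the deprecated generator would have produced.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: busybox:1.29
        stdin: true
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```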
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:15:46.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 22 14:16:00.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-d3a56767-a947-49df-beb9-1d4bd6a1e308 -c busybox-main-container --namespace=emptydir-6175 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 22 14:16:01.288: INFO: stderr: "I0222 14:16:00.961812    1308 log.go:172] (0xc0008c8160) (0xc0008e68c0) Create stream\nI0222 14:16:00.961945    1308 log.go:172] (0xc0008c8160) (0xc0008e68c0) Stream added, broadcasting: 1\nI0222 14:16:00.967060    1308 log.go:172] (0xc0008c8160) Reply frame received for 1\nI0222 14:16:00.967082    1308 log.go:172] (0xc0008c8160) (0xc0004200a0) Create stream\nI0222 14:16:00.967087    1308 log.go:172] (0xc0008c8160) (0xc0004200a0) Stream added, broadcasting: 3\nI0222 14:16:00.968211    1308 log.go:172] (0xc0008c8160) Reply frame received for 3\nI0222 14:16:00.968234    1308 log.go:172] (0xc0008c8160) (0xc0002b2000) Create stream\nI0222 14:16:00.968242    1308 log.go:172] (0xc0008c8160) (0xc0002b2000) Stream added, broadcasting: 5\nI0222 14:16:00.969387    1308 log.go:172] (0xc0008c8160) Reply frame received for 5\nI0222 14:16:01.123525    1308 log.go:172] (0xc0008c8160) Data frame received for 3\nI0222 14:16:01.123588    1308 log.go:172] (0xc0004200a0) (3) Data frame handling\nI0222 14:16:01.123630    1308 log.go:172] (0xc0004200a0) (3) Data frame sent\nI0222 14:16:01.276114    1308 log.go:172] (0xc0008c8160) (0xc0004200a0) Stream removed, broadcasting: 3\nI0222 14:16:01.276277    1308 log.go:172] (0xc0008c8160) (0xc0002b2000) Stream removed, broadcasting: 5\nI0222 14:16:01.276348    1308 log.go:172] (0xc0008c8160) Data frame received for 1\nI0222 14:16:01.276371    1308 log.go:172] (0xc0008e68c0) (1) Data frame handling\nI0222 14:16:01.276392    1308 log.go:172] (0xc0008e68c0) (1) Data frame sent\nI0222 14:16:01.276404    1308 log.go:172] (0xc0008c8160) (0xc0008e68c0) Stream removed, broadcasting: 1\nI0222 14:16:01.276424    1308 log.go:172] (0xc0008c8160) Go away received\nI0222 14:16:01.277136    1308 log.go:172] (0xc0008c8160) (0xc0008e68c0) Stream removed, broadcasting: 1\nI0222 14:16:01.277212    1308 log.go:172] (0xc0008c8160) (0xc0004200a0) Stream removed, broadcasting: 3\nI0222 14:16:01.277232    1308 log.go:172] 
(0xc0008c8160) (0xc0002b2000) Stream removed, broadcasting: 5\n"
Feb 22 14:16:01.289: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:16:01.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6175" for this suite.
Feb 22 14:16:07.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:16:07.548: INFO: namespace emptydir-6175 deletion completed in 6.159997611s

• [SLOW TEST:21.013 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:16:07.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 22 14:16:18.278: INFO: Successfully updated pod "annotationupdate97f6d6a6-f1b7-4c25-8445-6923a26f3737"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:16:20.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3197" for this suite.
Feb 22 14:16:42.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:16:42.553: INFO: namespace downward-api-3197 deletion completed in 22.164002194s

• [SLOW TEST:35.005 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:16:42.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 22 14:16:50.914: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:16:50.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7296" for this suite.
Feb 22 14:16:57.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:16:57.187: INFO: namespace container-runtime-7296 deletion completed in 6.19549037s

• [SLOW TEST:14.633 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:16:57.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 22 14:17:07.963: INFO: Successfully updated pod "labelsupdate66fe33bc-57ec-45d9-92a5-6cbeac2aac88"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:17:10.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4015" for this suite.
Feb 22 14:17:32.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:17:32.239: INFO: namespace projected-4015 deletion completed in 22.161946063s

• [SLOW TEST:35.052 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:17:32.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 22 14:17:48.681: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 22 14:17:48.690: INFO: Pod pod-with-poststart-http-hook still exists
Feb 22 14:17:50.690: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 22 14:17:50.701: INFO: Pod pod-with-poststart-http-hook still exists
Feb 22 14:17:52.690: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 22 14:17:52.697: INFO: Pod pod-with-poststart-http-hook still exists
Feb 22 14:17:54.690: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 22 14:17:54.732: INFO: Pod pod-with-poststart-http-hook still exists
Feb 22 14:17:56.690: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 22 14:17:56.702: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:17:56.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6081" for this suite.
Feb 22 14:18:18.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:18:18.927: INFO: namespace container-lifecycle-hook-6081 deletion completed in 22.219591811s

• [SLOW TEST:46.687 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:18:18.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:18:27.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9919" for this suite.
Feb 22 14:19:19.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:19:19.326: INFO: namespace kubelet-test-9919 deletion completed in 52.188560808s

• [SLOW TEST:60.398 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:19:19.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 22 14:19:19.500: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix111740241/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:19:19.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1327" for this suite.
Feb 22 14:19:25.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:19:25.917: INFO: namespace kubectl-1327 deletion completed in 6.257141252s

• [SLOW TEST:6.591 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:19:25.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 22 14:19:25.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-730'
Feb 22 14:19:27.977: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 22 14:19:27.978: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 22 14:19:30.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-730'
Feb 22 14:19:30.160: INFO: stderr: ""
Feb 22 14:19:30.160: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:19:30.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-730" for this suite.
Feb 22 14:19:36.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:19:36.453: INFO: namespace kubectl-730 deletion completed in 6.269276976s

• [SLOW TEST:10.536 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:19:36.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 22 14:19:36.612: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 22 14:19:36.625: INFO: Waiting for terminating namespaces to be deleted...
Feb 22 14:19:36.628: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 22 14:19:36.654: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 22 14:19:36.654: INFO: 	Container kube-bench ready: false, restart count 0
Feb 22 14:19:36.654: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 22 14:19:36.654: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 22 14:19:36.654: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 22 14:19:36.654: INFO: 	Container weave ready: true, restart count 0
Feb 22 14:19:36.654: INFO: 	Container weave-npc ready: true, restart count 0
Feb 22 14:19:36.654: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 22 14:19:36.671: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 22 14:19:36.671: INFO: 	Container kube-scheduler ready: true, restart count 15
Feb 22 14:19:36.671: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 22 14:19:36.671: INFO: 	Container coredns ready: true, restart count 0
Feb 22 14:19:36.671: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 22 14:19:36.671: INFO: 	Container etcd ready: true, restart count 0
Feb 22 14:19:36.671: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 22 14:19:36.671: INFO: 	Container weave ready: true, restart count 0
Feb 22 14:19:36.671: INFO: 	Container weave-npc ready: true, restart count 0
Feb 22 14:19:36.671: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 22 14:19:36.671: INFO: 	Container coredns ready: true, restart count 0
Feb 22 14:19:36.671: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 22 14:19:36.671: INFO: 	Container kube-controller-manager ready: true, restart count 23
Feb 22 14:19:36.671: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 22 14:19:36.671: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 22 14:19:36.671: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 22 14:19:36.671: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f5bf4be074045d], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:19:37.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1653" for this suite.
Feb 22 14:19:43.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:19:43.908: INFO: namespace sched-pred-1653 deletion completed in 6.199746428s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.454 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
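The FailedScheduling event above ("0/2 nodes are available: 2 node(s) didn't match node selector") reflects the scheduler's node-selector predicate: a pod with a non-empty `nodeSelector` is feasible only on nodes whose labels contain every requested key/value pair verbatim. A minimal Python sketch of that matching rule (illustrative only, not the scheduler's actual Go implementation; the node labels and selector key below are hypothetical):

```python
def matches_node_selector(node_labels: dict, node_selector: dict) -> bool:
    # A node is feasible only if every key/value pair in the pod's
    # nodeSelector appears exactly in the node's labels.
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Hypothetical two-node cluster: neither node carries the requested
# label, so 0/2 nodes are available -- the event logged above.
nodes = {
    "iruya-node": {"kubernetes.io/hostname": "iruya-node"},
    "iruya-server-sfge57q7djm7": {"kubernetes.io/hostname": "iruya-server-sfge57q7djm7"},
}
selector = {"kubernetes.io/e2e-nonexistent-label": "42"}
feasible = [n for n, labels in nodes.items() if matches_node_selector(labels, selector)]
print(len(feasible))  # 0
```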
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:19:43.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 14:19:43.996: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee" in namespace "downward-api-1778" to be "success or failure"
Feb 22 14:19:44.007: INFO: Pod "downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee": Phase="Pending", Reason="", readiness=false. Elapsed: 11.260994ms
Feb 22 14:19:46.016: INFO: Pod "downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020429689s
Feb 22 14:19:48.026: INFO: Pod "downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029612639s
Feb 22 14:19:50.038: INFO: Pod "downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041749265s
Feb 22 14:19:52.050: INFO: Pod "downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053797939s
STEP: Saw pod success
Feb 22 14:19:52.050: INFO: Pod "downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee" satisfied condition "success or failure"
Feb 22 14:19:52.055: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee container client-container: 
STEP: delete the pod
Feb 22 14:19:52.165: INFO: Waiting for pod downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee to disappear
Feb 22 14:19:52.172: INFO: Pod downwardapi-volume-e01c6cab-8228-4e75-8af6-c2973070d2ee no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:19:52.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1778" for this suite.
Feb 22 14:19:58.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:19:58.310: INFO: namespace downward-api-1778 deletion completed in 6.130051777s

• [SLOW TEST:14.401 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
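The test above mounts a downward API volume whose file exposes the container's CPU limit via a `resourceFieldRef` with a divisor: the published value is the resource quantity divided by the divisor, rounded up to an integer. A small sketch of that conversion (the limit and divisor values below are hypothetical examples, not the ones hard-coded in `downwardapi_volume.go`):

```python
import math
from fractions import Fraction

def downward_api_resource_value(quantity_millicores: int, divisor_millicores: int) -> int:
    # The downward API publishes resource fields as quantity / divisor,
    # rounded up to the nearest whole number.
    return math.ceil(Fraction(quantity_millicores, divisor_millicores))

print(downward_api_resource_value(500, 1))     # 500 (limits.cpu=500m, divisor=1m)
print(downward_api_resource_value(1250, 1000)) # 2   (limits.cpu=1250m, divisor=1 core)
```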
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:19:58.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-v249r in namespace proxy-8514
I0222 14:19:58.469995       8 runners.go:180] Created replication controller with name: proxy-service-v249r, namespace: proxy-8514, replica count: 1
I0222 14:19:59.521497       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0222 14:20:00.522370       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0222 14:20:01.522916       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0222 14:20:02.523399       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0222 14:20:03.524124       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0222 14:20:04.524830       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0222 14:20:05.525291       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0222 14:20:06.526492       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0222 14:20:07.527396       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0222 14:20:08.528165       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0222 14:20:09.528711       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0222 14:20:10.529379       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0222 14:20:11.529836       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0222 14:20:12.530618       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0222 14:20:13.531428       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0222 14:20:14.532124       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0222 14:20:15.532548       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0222 14:20:16.533169       8 runners.go:180] proxy-service-v249r Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 22 14:20:16.543: INFO: setup took 18.152294605s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
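Each attempt below hits the apiserver's proxy subresource for a pod or service. The target segment of the path colon-joins an optional scheme (`http`/`https`), the resource name, and an optional port (number or port name). A helper reproducing that path format (an illustrative sketch; the names are taken from the log lines that follow):

```python
def proxy_path(namespace: str, kind: str, name: str, scheme: str = "", port: str = "") -> str:
    # kind is "pods" or "services"; scheme and port are optional and
    # are colon-joined into the target segment, e.g. http:name:1080.
    target = name
    if port:
        target = f"{target}:{port}"
    if scheme:
        target = f"{scheme}:{target}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

print(proxy_path("proxy-8514", "pods", "proxy-service-v249r-q9bjc", "http", "1080"))
# /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/
```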
Feb 22 14:20:16.584: INFO: (0) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 40.869767ms)
Feb 22 14:20:16.584: INFO: (0) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 40.461769ms)
Feb 22 14:20:16.584: INFO: (0) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 40.660912ms)
Feb 22 14:20:16.584: INFO: (0) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 40.542929ms)
Feb 22 14:20:16.585: INFO: (0) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 41.430274ms)
Feb 22 14:20:16.584: INFO: (0) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 40.91054ms)
Feb 22 14:20:16.584: INFO: (0) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 40.66334ms)
Feb 22 14:20:16.588: INFO: (0) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 44.250448ms)
Feb 22 14:20:16.588: INFO: (0) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 44.332309ms)
Feb 22 14:20:16.588: INFO: (0) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 44.748965ms)
Feb 22 14:20:16.589: INFO: (0) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 45.609659ms)
Feb 22 14:20:16.596: INFO: (0) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 52.411363ms)
Feb 22 14:20:16.596: INFO: (0) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 52.174287ms)
Feb 22 14:20:16.596: INFO: (0) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 52.475335ms)
Feb 22 14:20:16.601: INFO: (0) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test (200; 10.242065ms)
Feb 22 14:20:16.612: INFO: (1) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 10.290253ms)
Feb 22 14:20:16.612: INFO: (1) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 10.643485ms)
Feb 22 14:20:16.614: INFO: (1) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 12.212502ms)
Feb 22 14:20:16.614: INFO: (1) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test<... (200; 13.085416ms)
Feb 22 14:20:16.615: INFO: (1) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 13.913792ms)
Feb 22 14:20:16.615: INFO: (1) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 13.934958ms)
Feb 22 14:20:16.617: INFO: (1) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 16.245834ms)
Feb 22 14:20:16.617: INFO: (1) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 15.864045ms)
Feb 22 14:20:16.617: INFO: (1) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 15.87309ms)
Feb 22 14:20:16.617: INFO: (1) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 16.520093ms)
Feb 22 14:20:16.618: INFO: (1) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 17.025765ms)
Feb 22 14:20:16.619: INFO: (1) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 17.26146ms)
Feb 22 14:20:16.623: INFO: (2) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 4.238964ms)
Feb 22 14:20:16.623: INFO: (2) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 4.224232ms)
Feb 22 14:20:16.623: INFO: (2) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 4.667878ms)
Feb 22 14:20:16.624: INFO: (2) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 5.118381ms)
Feb 22 14:20:16.625: INFO: (2) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test<... (200; 6.262828ms)
Feb 22 14:20:16.626: INFO: (2) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 6.885819ms)
Feb 22 14:20:16.626: INFO: (2) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 6.9226ms)
Feb 22 14:20:16.626: INFO: (2) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 6.812482ms)
Feb 22 14:20:16.627: INFO: (2) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 7.729466ms)
Feb 22 14:20:16.629: INFO: (2) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 9.855782ms)
Feb 22 14:20:16.629: INFO: (2) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 9.863298ms)
Feb 22 14:20:16.629: INFO: (2) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 10.144182ms)
Feb 22 14:20:16.629: INFO: (2) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 9.940163ms)
Feb 22 14:20:16.629: INFO: (2) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 10.297222ms)
Feb 22 14:20:16.629: INFO: (2) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 10.285228ms)
Feb 22 14:20:16.633: INFO: (3) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 4.017115ms)
Feb 22 14:20:16.635: INFO: (3) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 5.634445ms)
Feb 22 14:20:16.635: INFO: (3) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 6.004891ms)
Feb 22 14:20:16.636: INFO: (3) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test (200; 7.687113ms)
Feb 22 14:20:16.638: INFO: (3) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 8.412215ms)
Feb 22 14:20:16.642: INFO: (3) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 12.149713ms)
Feb 22 14:20:16.642: INFO: (3) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 12.687657ms)
Feb 22 14:20:16.642: INFO: (3) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 12.731096ms)
Feb 22 14:20:16.642: INFO: (3) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 12.927708ms)
Feb 22 14:20:16.642: INFO: (3) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 13.022578ms)
Feb 22 14:20:16.643: INFO: (3) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 13.152178ms)
Feb 22 14:20:16.643: INFO: (3) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 13.351003ms)
Feb 22 14:20:16.650: INFO: (3) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 20.722717ms)
Feb 22 14:20:16.660: INFO: (4) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 10.038595ms)
Feb 22 14:20:16.660: INFO: (4) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 10.135682ms)
Feb 22 14:20:16.671: INFO: (4) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 20.544503ms)
Feb 22 14:20:16.672: INFO: (4) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 21.284873ms)
Feb 22 14:20:16.672: INFO: (4) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 21.484235ms)
Feb 22 14:20:16.672: INFO: (4) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: ... (200; 21.693478ms)
Feb 22 14:20:16.672: INFO: (4) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 22.015229ms)
Feb 22 14:20:16.672: INFO: (4) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 21.782669ms)
Feb 22 14:20:16.673: INFO: (4) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 22.978635ms)
Feb 22 14:20:16.674: INFO: (4) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 22.896687ms)
Feb 22 14:20:16.674: INFO: (4) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 22.825277ms)
Feb 22 14:20:16.674: INFO: (4) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 23.007579ms)
Feb 22 14:20:16.701: INFO: (5) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 27.628417ms)
Feb 22 14:20:16.701: INFO: (5) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 27.23651ms)
Feb 22 14:20:16.701: INFO: (5) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 27.626079ms)
Feb 22 14:20:16.702: INFO: (5) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 27.415482ms)
Feb 22 14:20:16.704: INFO: (5) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 30.208567ms)
Feb 22 14:20:16.705: INFO: (5) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: ... (200; 32.629527ms)
Feb 22 14:20:16.707: INFO: (5) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 32.620749ms)
Feb 22 14:20:16.707: INFO: (5) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 33.079906ms)
Feb 22 14:20:16.707: INFO: (5) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 32.702071ms)
Feb 22 14:20:16.723: INFO: (6) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 14.764427ms)
Feb 22 14:20:16.723: INFO: (6) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test (200; 15.448242ms)
Feb 22 14:20:16.724: INFO: (6) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 15.608714ms)
Feb 22 14:20:16.724: INFO: (6) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 16.992607ms)
Feb 22 14:20:16.724: INFO: (6) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 15.999383ms)
Feb 22 14:20:16.724: INFO: (6) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 15.582764ms)
Feb 22 14:20:16.725: INFO: (6) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 16.390818ms)
Feb 22 14:20:16.725: INFO: (6) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 16.315504ms)
Feb 22 14:20:16.726: INFO: (6) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 17.51537ms)
Feb 22 14:20:16.726: INFO: (6) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 18.229048ms)
Feb 22 14:20:16.726: INFO: (6) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 18.016175ms)
Feb 22 14:20:16.727: INFO: (6) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 18.685849ms)
Feb 22 14:20:16.736: INFO: (6) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 27.571464ms)
Feb 22 14:20:16.738: INFO: (6) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 30.11757ms)
Feb 22 14:20:16.738: INFO: (6) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 29.842573ms)
Feb 22 14:20:16.783: INFO: (7) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 44.002465ms)
Feb 22 14:20:16.783: INFO: (7) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 44.253089ms)
Feb 22 14:20:16.783: INFO: (7) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 44.021469ms)
Feb 22 14:20:16.783: INFO: (7) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 44.585057ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 65.258151ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 64.986552ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 64.917606ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 65.426738ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 65.154078ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 65.120332ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 65.235507ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 65.398546ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 65.320628ms)
Feb 22 14:20:16.804: INFO: (7) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: ... (200; 84.328261ms)
Feb 22 14:20:16.823: INFO: (7) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 84.756664ms)
Feb 22 14:20:16.853: INFO: (8) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 29.846503ms)
Feb 22 14:20:16.853: INFO: (8) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 29.841422ms)
Feb 22 14:20:16.855: INFO: (8) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 31.578238ms)
Feb 22 14:20:16.856: INFO: (8) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 32.405348ms)
Feb 22 14:20:16.857: INFO: (8) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 32.8719ms)
Feb 22 14:20:16.858: INFO: (8) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 34.203238ms)
Feb 22 14:20:16.859: INFO: (8) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 34.99837ms)
Feb 22 14:20:16.862: INFO: (8) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 38.229893ms)
Feb 22 14:20:16.862: INFO: (8) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 38.063717ms)
Feb 22 14:20:16.864: INFO: (8) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 40.546454ms)
Feb 22 14:20:16.864: INFO: (8) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 40.470911ms)
Feb 22 14:20:16.865: INFO: (8) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test<... (200; 35.139637ms)
Feb 22 14:20:16.910: INFO: (9) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 34.121588ms)
Feb 22 14:20:16.910: INFO: (9) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 33.175545ms)
Feb 22 14:20:16.912: INFO: (9) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 34.48033ms)
Feb 22 14:20:16.912: INFO: (9) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test (200; 38.076972ms)
Feb 22 14:20:16.914: INFO: (9) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 38.753877ms)
Feb 22 14:20:16.927: INFO: (10) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 12.52549ms)
Feb 22 14:20:16.927: INFO: (10) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test<... (200; 14.051031ms)
Feb 22 14:20:16.932: INFO: (10) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 17.313645ms)
Feb 22 14:20:16.935: INFO: (10) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 20.447267ms)
Feb 22 14:20:16.935: INFO: (10) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 20.30206ms)
Feb 22 14:20:16.935: INFO: (10) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 20.260907ms)
Feb 22 14:20:16.937: INFO: (10) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 21.800094ms)
Feb 22 14:20:16.937: INFO: (10) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 21.866762ms)
Feb 22 14:20:16.937: INFO: (10) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 22.281414ms)
Feb 22 14:20:16.937: INFO: (10) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 22.609032ms)
Feb 22 14:20:16.941: INFO: (10) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 26.631288ms)
Feb 22 14:20:16.941: INFO: (10) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 27.283755ms)
Feb 22 14:20:16.961: INFO: (11) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 18.628878ms)
Feb 22 14:20:16.964: INFO: (11) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 22.052196ms)
Feb 22 14:20:16.965: INFO: (11) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 22.930802ms)
Feb 22 14:20:16.966: INFO: (11) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 24.004038ms)
Feb 22 14:20:16.966: INFO: (11) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 24.100051ms)
Feb 22 14:20:16.966: INFO: (11) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 24.007571ms)
Feb 22 14:20:16.966: INFO: (11) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 24.188993ms)
Feb 22 14:20:16.966: INFO: (11) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 24.456341ms)
Feb 22 14:20:16.967: INFO: (11) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 24.924585ms)
Feb 22 14:20:16.968: INFO: (11) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 26.132112ms)
Feb 22 14:20:16.969: INFO: (11) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 26.329656ms)
Feb 22 14:20:16.969: INFO: (11) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test (200; 13.252404ms)
Feb 22 14:20:16.985: INFO: (12) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test<... (200; 14.657183ms)
Feb 22 14:20:16.986: INFO: (12) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 14.283788ms)
Feb 22 14:20:16.986: INFO: (12) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 14.319071ms)
Feb 22 14:20:16.986: INFO: (12) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 14.648377ms)
Feb 22 14:20:16.987: INFO: (12) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 15.206921ms)
Feb 22 14:20:16.989: INFO: (12) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 17.043656ms)
Feb 22 14:20:16.989: INFO: (12) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 17.028476ms)
Feb 22 14:20:16.989: INFO: (12) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 17.363906ms)
Feb 22 14:20:16.989: INFO: (12) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 18.045556ms)
Feb 22 14:20:16.990: INFO: (12) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 18.697287ms)
Feb 22 14:20:16.999: INFO: (13) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 8.750466ms)
Feb 22 14:20:16.999: INFO: (13) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 8.963792ms)
Feb 22 14:20:16.999: INFO: (13) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 8.847734ms)
Feb 22 14:20:16.999: INFO: (13) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 9.100143ms)
Feb 22 14:20:16.999: INFO: (13) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test (200; 9.289979ms)
Feb 22 14:20:16.999: INFO: (13) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 8.981588ms)
Feb 22 14:20:17.000: INFO: (13) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 9.411039ms)
Feb 22 14:20:17.000: INFO: (13) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 9.457706ms)
Feb 22 14:20:17.000: INFO: (13) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 9.127969ms)
Feb 22 14:20:17.003: INFO: (13) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 12.552665ms)
Feb 22 14:20:17.004: INFO: (13) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 13.785318ms)
Feb 22 14:20:17.004: INFO: (13) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 13.560628ms)
Feb 22 14:20:17.004: INFO: (13) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 13.708282ms)
Feb 22 14:20:17.004: INFO: (13) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 13.676667ms)
Feb 22 14:20:17.005: INFO: (13) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 14.77072ms)
Feb 22 14:20:17.017: INFO: (14) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 11.981648ms)
Feb 22 14:20:17.021: INFO: (14) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 15.728981ms)
Feb 22 14:20:17.021: INFO: (14) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 16.035602ms)
Feb 22 14:20:17.022: INFO: (14) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 16.441535ms)
Feb 22 14:20:17.022: INFO: (14) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test (200; 18.652136ms)
Feb 22 14:20:17.060: INFO: (15) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 20.628203ms)
Feb 22 14:20:17.060: INFO: (15) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 20.91392ms)
Feb 22 14:20:17.061: INFO: (15) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 19.720841ms)
Feb 22 14:20:17.061: INFO: (15) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 20.725367ms)
Feb 22 14:20:17.062: INFO: (15) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 23.142231ms)
Feb 22 14:20:17.062: INFO: (15) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 21.601877ms)
Feb 22 14:20:17.062: INFO: (15) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 21.615286ms)
Feb 22 14:20:17.062: INFO: (15) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 21.527857ms)
Feb 22 14:20:17.062: INFO: (15) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 20.798392ms)
Feb 22 14:20:17.063: INFO: (15) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 21.319482ms)
Feb 22 14:20:17.063: INFO: (15) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test (200; 43.691282ms)
Feb 22 14:20:17.112: INFO: (16) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 43.945363ms)
Feb 22 14:20:17.112: INFO: (16) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 44.07162ms)
Feb 22 14:20:17.112: INFO: (16) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 44.581743ms)
Feb 22 14:20:17.112: INFO: (16) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 44.61593ms)
Feb 22 14:20:17.112: INFO: (16) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test<... (200; 44.376849ms)
Feb 22 14:20:17.115: INFO: (16) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname2/proxy/: bar (200; 47.524573ms)
Feb 22 14:20:17.116: INFO: (16) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 48.247406ms)
Feb 22 14:20:17.116: INFO: (16) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 48.234767ms)
Feb 22 14:20:17.118: INFO: (16) /api/v1/namespaces/proxy-8514/services/http:proxy-service-v249r:portname1/proxy/: foo (200; 50.003363ms)
Feb 22 14:20:17.118: INFO: (16) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 50.168099ms)
Feb 22 14:20:17.119: INFO: (16) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 50.949802ms)
Feb 22 14:20:17.119: INFO: (16) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 52.008762ms)
Feb 22 14:20:17.121: INFO: (16) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname2/proxy/: bar (200; 54.29363ms)
Feb 22 14:20:17.135: INFO: (17) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: test (200; 16.233351ms)
Feb 22 14:20:17.139: INFO: (17) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 16.881771ms)
Feb 22 14:20:17.139: INFO: (17) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 16.350685ms)
Feb 22 14:20:17.139: INFO: (17) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 16.220806ms)
Feb 22 14:20:17.150: INFO: (18) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 10.620867ms)
Feb 22 14:20:17.150: INFO: (18) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:160/proxy/: foo (200; 11.286335ms)
Feb 22 14:20:17.150: INFO: (18) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 11.058235ms)
Feb 22 14:20:17.151: INFO: (18) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:460/proxy/: tls baz (200; 12.206959ms)
Feb 22 14:20:17.151: INFO: (18) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 12.407431ms)
Feb 22 14:20:17.152: INFO: (18) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 12.822566ms)
Feb 22 14:20:17.152: INFO: (18) /api/v1/namespaces/proxy-8514/services/proxy-service-v249r:portname1/proxy/: foo (200; 13.058379ms)
Feb 22 14:20:17.152: INFO: (18) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 13.341777ms)
Feb 22 14:20:17.152: INFO: (18) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 13.580725ms)
Feb 22 14:20:17.152: INFO: (18) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:1080/proxy/: ... (200; 13.39725ms)
Feb 22 14:20:17.152: INFO: (18) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: ... (200; 12.388929ms)
Feb 22 14:20:17.169: INFO: (19) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:1080/proxy/: test<... (200; 13.379765ms)
Feb 22 14:20:17.169: INFO: (19) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:162/proxy/: bar (200; 13.363563ms)
Feb 22 14:20:17.169: INFO: (19) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc:162/proxy/: bar (200; 13.406528ms)
Feb 22 14:20:17.169: INFO: (19) /api/v1/namespaces/proxy-8514/pods/proxy-service-v249r-q9bjc/proxy/: test (200; 13.407461ms)
Feb 22 14:20:17.170: INFO: (19) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname1/proxy/: tls baz (200; 13.992214ms)
Feb 22 14:20:17.170: INFO: (19) /api/v1/namespaces/proxy-8514/pods/http:proxy-service-v249r-q9bjc:160/proxy/: foo (200; 14.031193ms)
Feb 22 14:20:17.170: INFO: (19) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:462/proxy/: tls qux (200; 13.962839ms)
Feb 22 14:20:17.170: INFO: (19) /api/v1/namespaces/proxy-8514/services/https:proxy-service-v249r:tlsportname2/proxy/: tls qux (200; 14.034684ms)
Feb 22 14:20:17.170: INFO: (19) /api/v1/namespaces/proxy-8514/pods/https:proxy-service-v249r-q9bjc:443/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-7559e65a-33ec-44c1-a31a-6fd53572282a
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:20:32.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4807" for this suite.
Feb 22 14:20:38.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:20:39.012: INFO: namespace configmap-4807 deletion completed in 6.167885044s

• [SLOW TEST:6.271 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
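The ConfigMap test above verifies that the API server rejects a ConfigMap whose data map contains an empty key. A minimal sketch of that validation rule (this is an illustrative helper, not the actual apiserver code; the 253-character limit is the documented key-length cap):

```python
def validate_configmap_keys(data):
    """Reject empty or oversized data keys, mirroring the API server's
    ConfigMap key validation (sketch; not the real implementation)."""
    errors = []
    for key in data:
        if key == "":
            errors.append("data key must not be empty")
        elif len(key) > 253:
            errors.append(f"data key {key!r} exceeds 253 characters")
    return errors

# A ConfigMap with an empty key fails validation, as in the test above.
print(validate_configmap_keys({"": "value"}))   # ['data key must not be empty']
print(validate_configmap_keys({"ok": "value"}))  # []
```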
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:20:39.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 14:20:39.120: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 22 14:20:39.130: INFO: Number of nodes with available pods: 0
Feb 22 14:20:39.130: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 22 14:20:39.169: INFO: Number of nodes with available pods: 0
Feb 22 14:20:39.170: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:40.182: INFO: Number of nodes with available pods: 0
Feb 22 14:20:40.182: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:41.180: INFO: Number of nodes with available pods: 0
Feb 22 14:20:41.181: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:42.188: INFO: Number of nodes with available pods: 0
Feb 22 14:20:42.189: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:43.180: INFO: Number of nodes with available pods: 0
Feb 22 14:20:43.180: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:44.176: INFO: Number of nodes with available pods: 0
Feb 22 14:20:44.176: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:45.177: INFO: Number of nodes with available pods: 0
Feb 22 14:20:45.177: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:46.177: INFO: Number of nodes with available pods: 1
Feb 22 14:20:46.177: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 22 14:20:46.219: INFO: Number of nodes with available pods: 1
Feb 22 14:20:46.219: INFO: Number of running nodes: 0, number of available pods: 1
Feb 22 14:20:47.228: INFO: Number of nodes with available pods: 0
Feb 22 14:20:47.228: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 22 14:20:47.256: INFO: Number of nodes with available pods: 0
Feb 22 14:20:47.256: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:48.265: INFO: Number of nodes with available pods: 0
Feb 22 14:20:48.265: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:49.264: INFO: Number of nodes with available pods: 0
Feb 22 14:20:49.264: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:50.263: INFO: Number of nodes with available pods: 0
Feb 22 14:20:50.263: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:51.266: INFO: Number of nodes with available pods: 0
Feb 22 14:20:51.266: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:52.264: INFO: Number of nodes with available pods: 0
Feb 22 14:20:52.264: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:53.269: INFO: Number of nodes with available pods: 0
Feb 22 14:20:53.269: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:54.278: INFO: Number of nodes with available pods: 0
Feb 22 14:20:54.278: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:55.265: INFO: Number of nodes with available pods: 0
Feb 22 14:20:55.265: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:56.265: INFO: Number of nodes with available pods: 0
Feb 22 14:20:56.265: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:57.270: INFO: Number of nodes with available pods: 0
Feb 22 14:20:57.270: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:58.264: INFO: Number of nodes with available pods: 0
Feb 22 14:20:58.264: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:20:59.266: INFO: Number of nodes with available pods: 0
Feb 22 14:20:59.266: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:21:00.270: INFO: Number of nodes with available pods: 0
Feb 22 14:21:00.270: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:21:01.272: INFO: Number of nodes with available pods: 0
Feb 22 14:21:01.272: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:21:02.262: INFO: Number of nodes with available pods: 0
Feb 22 14:21:02.262: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:21:03.267: INFO: Number of nodes with available pods: 0
Feb 22 14:21:03.267: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:21:04.263: INFO: Number of nodes with available pods: 1
Feb 22 14:21:04.263: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7657, will wait for the garbage collector to delete the pods
Feb 22 14:21:04.431: INFO: Deleting DaemonSet.extensions daemon-set took: 15.78225ms
Feb 22 14:21:04.732: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.667871ms
Feb 22 14:21:11.338: INFO: Number of nodes with available pods: 0
Feb 22 14:21:11.338: INFO: Number of running nodes: 0, number of available pods: 0
Feb 22 14:21:11.343: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7657/daemonsets","resourceVersion":"25335147"},"items":null}

Feb 22 14:21:11.346: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7657/pods","resourceVersion":"25335147"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:21:11.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7657" for this suite.
Feb 22 14:21:17.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:21:17.604: INFO: namespace daemonsets-7657 deletion completed in 6.189713272s

• [SLOW TEST:38.591 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
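The DaemonSet test above drives scheduling by flipping a node label between values and watching daemon pods appear and disappear. The underlying rule is simple: a DaemonSet with a nodeSelector places a pod on a node only if every selector key/value matches the node's labels. A sketch of that matching (the `color` label key is hypothetical, for illustration only):

```python
def node_matches(node_labels, selector):
    """A DaemonSet schedules a daemon pod onto a node only when every
    key/value pair in its nodeSelector matches the node's labels."""
    return all(node_labels.get(k) == v for k, v in selector.items())

selector = {"color": "blue"}  # hypothetical label used for illustration
print(node_matches({"color": "blue"}, selector))   # True  -> pod launched
print(node_matches({"color": "green"}, selector))  # False -> pod unscheduled
```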
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:21:17.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 22 14:21:29.860: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00: the server could not find the requested resource (get pods dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00)
Feb 22 14:21:29.879: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00: the server could not find the requested resource (get pods dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00)
Feb 22 14:21:29.890: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00: the server could not find the requested resource (get pods dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00)
Feb 22 14:21:29.899: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00: the server could not find the requested resource (get pods dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00)
Feb 22 14:21:29.904: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00: the server could not find the requested resource (get pods dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00)
Feb 22 14:21:29.910: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00: the server could not find the requested resource (get pods dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00)
Feb 22 14:21:29.921: INFO: Unable to read jessie_udp@PodARecord from pod dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00: the server could not find the requested resource (get pods dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00)
Feb 22 14:21:29.930: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00: the server could not find the requested resource (get pods dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00)
Feb 22 14:21:29.930: INFO: Lookups using dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 22 14:21:35.006: INFO: DNS probes using dns-7153/dns-test-27b42067-f68c-4bfc-b5b8-79ecc2ae6a00 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:21:35.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7153" for this suite.
Feb 22 14:21:43.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:21:43.734: INFO: namespace dns-7153 deletion completed in 8.388513137s

• [SLOW TEST:26.129 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
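The probe scripts in the DNS test above build each pod's A record with `hostname -i | awk -F. '{...}'`: the dots in the pod IP become dashes, followed by `<namespace>.pod.<cluster-domain>`. The same construction as a sketch:

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build the pod A-record name the probe scripts resolve:
    dashes replace the dots of the pod IP, then the namespace,
    then the 'pod' subdomain of the cluster domain."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# The pod IP is hypothetical; the namespace matches the test run above.
print(pod_a_record("10.44.0.1", "dns-7153"))
# 10-44-0-1.dns-7153.pod.cluster.local
```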
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:21:43.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 22 14:21:52.431: INFO: Successfully updated pod "pod-update-30ed353f-4dec-419a-9e64-c045e19fa390"
STEP: verifying the updated pod is in kubernetes
Feb 22 14:21:52.488: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:21:52.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9170" for this suite.
Feb 22 14:22:14.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:22:14.616: INFO: namespace pods-9170 deletion completed in 22.12016847s

• [SLOW TEST:30.882 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:22:14.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 22 14:22:23.340: INFO: Successfully updated pod "labelsupdate2dd679d6-d4e4-4ba7-a35b-a363e721eef7"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:22:25.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2495" for this suite.
Feb 22 14:22:47.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:22:47.722: INFO: namespace downward-api-2495 deletion completed in 22.282866731s

• [SLOW TEST:33.106 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
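The Downward API test above mounts pod labels as a volume file and checks the file content tracks label modifications. A sketch of the file format the volume exposes, assuming the usual one `key="value"` line per label, sorted by key (illustrative; escaping of special characters in values is omitted):

```python
def format_labels_file(labels):
    """Render pod labels the way a downward API volume exposes them:
    one key="value" line per label, sorted by key (sketch of the format)."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(labels.items()))

print(format_labels_file({"tier": "backend", "app": "labelsupdate"}))
# app="labelsupdate"
# tier="backend"
```

When the pod's labels are patched, the kubelet rewrites this projected file, which is what the test polls for.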
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:22:47.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 22 14:22:47.885: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4179,SelfLink:/api/v1/namespaces/watch-4179/configmaps/e2e-watch-test-label-changed,UID:0e228626-ecdc-4b05-919e-49d4504e64db,ResourceVersion:25335395,Generation:0,CreationTimestamp:2020-02-22 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 22 14:22:47.886: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4179,SelfLink:/api/v1/namespaces/watch-4179/configmaps/e2e-watch-test-label-changed,UID:0e228626-ecdc-4b05-919e-49d4504e64db,ResourceVersion:25335396,Generation:0,CreationTimestamp:2020-02-22 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 22 14:22:47.886: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4179,SelfLink:/api/v1/namespaces/watch-4179/configmaps/e2e-watch-test-label-changed,UID:0e228626-ecdc-4b05-919e-49d4504e64db,ResourceVersion:25335397,Generation:0,CreationTimestamp:2020-02-22 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 22 14:22:57.961: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4179,SelfLink:/api/v1/namespaces/watch-4179/configmaps/e2e-watch-test-label-changed,UID:0e228626-ecdc-4b05-919e-49d4504e64db,ResourceVersion:25335413,Generation:0,CreationTimestamp:2020-02-22 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 22 14:22:57.961: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4179,SelfLink:/api/v1/namespaces/watch-4179/configmaps/e2e-watch-test-label-changed,UID:0e228626-ecdc-4b05-919e-49d4504e64db,ResourceVersion:25335414,Generation:0,CreationTimestamp:2020-02-22 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 22 14:22:57.961: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4179,SelfLink:/api/v1/namespaces/watch-4179/configmaps/e2e-watch-test-label-changed,UID:0e228626-ecdc-4b05-919e-49d4504e64db,ResourceVersion:25335415,Generation:0,CreationTimestamp:2020-02-22 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:22:57.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4179" for this suite.
Feb 22 14:23:03.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:23:04.113: INFO: namespace watch-4179 deletion completed in 6.143568398s

• [SLOW TEST:16.390 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:23:04.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b
Feb 22 14:23:04.215: INFO: Pod name my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b: Found 0 pods out of 1
Feb 22 14:23:09.264: INFO: Pod name my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b: Found 1 pods out of 1
Feb 22 14:23:09.264: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b" are running
Feb 22 14:23:13.288: INFO: Pod "my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b-dwshq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-22 14:23:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-22 14:23:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-22 14:23:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-22 14:23:04 +0000 UTC Reason: Message:}])
Feb 22 14:23:13.288: INFO: Trying to dial the pod
Feb 22 14:23:18.355: INFO: Controller my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b: Got expected result from replica 1 [my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b-dwshq]: "my-hostname-basic-55e3965d-dbc6-4bec-8eaa-bf27c634237b-dwshq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:23:18.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9387" for this suite.
Feb 22 14:23:24.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:23:24.512: INFO: namespace replication-controller-9387 deletion completed in 6.146267502s

• [SLOW TEST:20.399 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:23:24.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 14:23:24.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900" in namespace "projected-9184" to be "success or failure"
Feb 22 14:23:24.632: INFO: Pod "downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900": Phase="Pending", Reason="", readiness=false. Elapsed: 3.547404ms
Feb 22 14:23:26.641: INFO: Pod "downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012784317s
Feb 22 14:23:28.654: INFO: Pod "downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025933823s
Feb 22 14:23:30.666: INFO: Pod "downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037525363s
Feb 22 14:23:32.685: INFO: Pod "downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056903944s
Feb 22 14:23:34.702: INFO: Pod "downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073727098s
STEP: Saw pod success
Feb 22 14:23:34.702: INFO: Pod "downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900" satisfied condition "success or failure"
Feb 22 14:23:34.708: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900 container client-container: 
STEP: delete the pod
Feb 22 14:23:34.822: INFO: Waiting for pod downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900 to disappear
Feb 22 14:23:34.826: INFO: Pod downwardapi-volume-bcfe9683-130f-47a2-af19-ef4dfdcf4900 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:23:34.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9184" for this suite.
Feb 22 14:23:40.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:23:41.008: INFO: namespace projected-9184 deletion completed in 6.172567683s

• [SLOW TEST:16.496 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:23:41.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9007.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9007.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9007.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9007.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9007.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9007.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 22 14:23:57.357: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9007/dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8: the server could not find the requested resource (get pods dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8)
Feb 22 14:23:57.361: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9007/dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8: the server could not find the requested resource (get pods dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8)
Feb 22 14:23:57.366: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9007.svc.cluster.local from pod dns-9007/dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8: the server could not find the requested resource (get pods dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8)
Feb 22 14:23:57.371: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9007/dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8: the server could not find the requested resource (get pods dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8)
Feb 22 14:23:57.375: INFO: Unable to read jessie_udp@PodARecord from pod dns-9007/dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8: the server could not find the requested resource (get pods dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8)
Feb 22 14:23:57.388: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9007/dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8: the server could not find the requested resource (get pods dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8)
Feb 22 14:23:57.388: INFO: Lookups using dns-9007/dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9007.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 22 14:24:02.450: INFO: DNS probes using dns-9007/dns-test-2b21e825-e8c5-4450-9a52-74ce55aea2f8 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:24:02.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9007" for this suite.
Feb 22 14:24:08.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:24:08.727: INFO: namespace dns-9007 deletion completed in 6.206211867s

• [SLOW TEST:27.718 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:24:08.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 14:24:08.809: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a" in namespace "downward-api-1605" to be "success or failure"
Feb 22 14:24:08.887: INFO: Pod "downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 77.889932ms
Feb 22 14:24:10.897: INFO: Pod "downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087294102s
Feb 22 14:24:12.905: INFO: Pod "downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095363125s
Feb 22 14:24:14.912: INFO: Pod "downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102653104s
Feb 22 14:24:16.921: INFO: Pod "downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111975087s
Feb 22 14:24:18.930: INFO: Pod "downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.120746422s
STEP: Saw pod success
Feb 22 14:24:18.930: INFO: Pod "downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a" satisfied condition "success or failure"
Feb 22 14:24:18.935: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a container client-container: 
STEP: delete the pod
Feb 22 14:24:19.034: INFO: Waiting for pod downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a to disappear
Feb 22 14:24:19.123: INFO: Pod downwardapi-volume-a0a9086c-6663-4132-a6b9-437149d56a1a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:24:19.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1605" for this suite.
Feb 22 14:24:25.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:24:25.259: INFO: namespace downward-api-1605 deletion completed in 6.12397059s

• [SLOW TEST:16.531 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:24:25.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 14:24:25.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d" in namespace "downward-api-6245" to be "success or failure"
Feb 22 14:24:25.347: INFO: Pod "downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.962953ms
Feb 22 14:24:27.358: INFO: Pod "downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015712263s
Feb 22 14:24:29.373: INFO: Pod "downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030156999s
Feb 22 14:24:31.422: INFO: Pod "downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079053316s
Feb 22 14:24:33.440: INFO: Pod "downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097262645s
Feb 22 14:24:35.456: INFO: Pod "downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113705081s
STEP: Saw pod success
Feb 22 14:24:35.457: INFO: Pod "downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d" satisfied condition "success or failure"
Feb 22 14:24:35.463: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d container client-container: 
STEP: delete the pod
Feb 22 14:24:35.572: INFO: Waiting for pod downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d to disappear
Feb 22 14:24:35.592: INFO: Pod downwardapi-volume-eac09c0e-d381-4b33-b88e-c103a00ba23d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:24:35.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6245" for this suite.
Feb 22 14:24:41.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:24:41.799: INFO: namespace downward-api-6245 deletion completed in 6.168536086s

• [SLOW TEST:16.540 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:24:41.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 22 14:24:41.951: INFO: Waiting up to 5m0s for pod "pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad" in namespace "emptydir-7582" to be "success or failure"
Feb 22 14:24:41.963: INFO: Pod "pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad": Phase="Pending", Reason="", readiness=false. Elapsed: 11.60911ms
Feb 22 14:24:43.970: INFO: Pod "pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018293411s
Feb 22 14:24:45.990: INFO: Pod "pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038695915s
Feb 22 14:24:48.013: INFO: Pod "pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061294628s
Feb 22 14:24:50.022: INFO: Pod "pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06988434s
Feb 22 14:24:52.035: INFO: Pod "pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad": Phase="Pending", Reason="", readiness=false. Elapsed: 10.083509169s
Feb 22 14:24:56.222: INFO: Pod "pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.27016011s
STEP: Saw pod success
Feb 22 14:24:56.222: INFO: Pod "pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad" satisfied condition "success or failure"
Feb 22 14:24:56.249: INFO: Trying to get logs from node iruya-node pod pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad container test-container: 
STEP: delete the pod
Feb 22 14:24:56.402: INFO: Waiting for pod pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad to disappear
Feb 22 14:24:56.413: INFO: Pod pod-2f29581e-5d40-41cb-a0d2-b0ac26f155ad no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:24:56.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7582" for this suite.
Feb 22 14:25:02.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:25:02.570: INFO: namespace emptydir-7582 deletion completed in 6.148142208s

• [SLOW TEST:20.769 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:25:02.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 22 14:25:02.689: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9576" to be "success or failure"
Feb 22 14:25:02.701: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.748578ms
Feb 22 14:25:04.928: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238739981s
Feb 22 14:25:06.934: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24452776s
Feb 22 14:25:08.942: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252243706s
Feb 22 14:25:10.950: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.260286398s
Feb 22 14:25:12.961: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.271314517s
Feb 22 14:25:14.967: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.27785068s
Feb 22 14:25:16.977: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.287919264s
Feb 22 14:25:18.986: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.29642193s
STEP: Saw pod success
Feb 22 14:25:18.986: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 22 14:25:18.994: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 22 14:25:19.085: INFO: Waiting for pod pod-host-path-test to disappear
Feb 22 14:25:19.124: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:25:19.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9576" for this suite.
Feb 22 14:25:26.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:25:26.153: INFO: namespace hostpath-9576 deletion completed in 7.021897005s

• [SLOW TEST:23.582 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:25:26.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:25:37.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-753" for this suite.
Feb 22 14:25:43.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:25:43.806: INFO: namespace emptydir-wrapper-753 deletion completed in 6.182325232s

• [SLOW TEST:17.653 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:25:43.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 22 14:25:44.020: INFO: Number of nodes with available pods: 0
Feb 22 14:25:44.020: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:45.920: INFO: Number of nodes with available pods: 0
Feb 22 14:25:45.920: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:46.036: INFO: Number of nodes with available pods: 0
Feb 22 14:25:46.036: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:47.033: INFO: Number of nodes with available pods: 0
Feb 22 14:25:47.033: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:48.534: INFO: Number of nodes with available pods: 0
Feb 22 14:25:48.534: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:49.041: INFO: Number of nodes with available pods: 0
Feb 22 14:25:49.041: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:50.042: INFO: Number of nodes with available pods: 0
Feb 22 14:25:50.042: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:51.350: INFO: Number of nodes with available pods: 0
Feb 22 14:25:51.350: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:53.994: INFO: Number of nodes with available pods: 0
Feb 22 14:25:53.994: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:54.392: INFO: Number of nodes with available pods: 0
Feb 22 14:25:54.392: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:55.045: INFO: Number of nodes with available pods: 0
Feb 22 14:25:55.045: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:56.031: INFO: Number of nodes with available pods: 1
Feb 22 14:25:56.031: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 22 14:25:57.033: INFO: Number of nodes with available pods: 2
Feb 22 14:25:57.033: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 22 14:25:57.105: INFO: Number of nodes with available pods: 1
Feb 22 14:25:57.105: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:58.232: INFO: Number of nodes with available pods: 1
Feb 22 14:25:58.232: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:25:59.145: INFO: Number of nodes with available pods: 1
Feb 22 14:25:59.146: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:00.116: INFO: Number of nodes with available pods: 1
Feb 22 14:26:00.116: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:01.117: INFO: Number of nodes with available pods: 1
Feb 22 14:26:01.117: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:02.115: INFO: Number of nodes with available pods: 1
Feb 22 14:26:02.115: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:03.116: INFO: Number of nodes with available pods: 1
Feb 22 14:26:03.116: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:04.118: INFO: Number of nodes with available pods: 1
Feb 22 14:26:04.118: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:05.121: INFO: Number of nodes with available pods: 1
Feb 22 14:26:05.121: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:06.129: INFO: Number of nodes with available pods: 1
Feb 22 14:26:06.129: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:07.119: INFO: Number of nodes with available pods: 1
Feb 22 14:26:07.119: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:08.120: INFO: Number of nodes with available pods: 1
Feb 22 14:26:08.120: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:09.125: INFO: Number of nodes with available pods: 1
Feb 22 14:26:09.125: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:10.114: INFO: Number of nodes with available pods: 1
Feb 22 14:26:10.114: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:11.120: INFO: Number of nodes with available pods: 1
Feb 22 14:26:11.120: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:12.120: INFO: Number of nodes with available pods: 1
Feb 22 14:26:12.120: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:13.122: INFO: Number of nodes with available pods: 1
Feb 22 14:26:13.122: INFO: Node iruya-node is running more than one daemon pod
Feb 22 14:26:14.118: INFO: Number of nodes with available pods: 2
Feb 22 14:26:14.118: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4516, will wait for the garbage collector to delete the pods
Feb 22 14:26:14.186: INFO: Deleting DaemonSet.extensions daemon-set took: 13.89223ms
Feb 22 14:26:14.487: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.724882ms
Feb 22 14:26:26.722: INFO: Number of nodes with available pods: 0
Feb 22 14:26:26.722: INFO: Number of running nodes: 0, number of available pods: 0
Feb 22 14:26:26.728: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4516/daemonsets","resourceVersion":"25335958"},"items":null}

Feb 22 14:26:26.739: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4516/pods","resourceVersion":"25335959"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:26:26.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4516" for this suite.
Feb 22 14:26:32.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:26:32.981: INFO: namespace daemonsets-4516 deletion completed in 6.224487903s

• [SLOW TEST:49.175 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:26:32.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 22 14:26:33.085: INFO: Waiting up to 5m0s for pod "downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113" in namespace "downward-api-1381" to be "success or failure"
Feb 22 14:26:33.119: INFO: Pod "downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113": Phase="Pending", Reason="", readiness=false. Elapsed: 33.850671ms
Feb 22 14:26:35.132: INFO: Pod "downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04664682s
Feb 22 14:26:37.142: INFO: Pod "downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056762497s
Feb 22 14:26:39.151: INFO: Pod "downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065788727s
Feb 22 14:26:41.158: INFO: Pod "downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073166016s
STEP: Saw pod success
Feb 22 14:26:41.158: INFO: Pod "downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113" satisfied condition "success or failure"
Feb 22 14:26:41.164: INFO: Trying to get logs from node iruya-node pod downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113 container dapi-container: 
STEP: delete the pod
Feb 22 14:26:41.258: INFO: Waiting for pod downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113 to disappear
Feb 22 14:26:41.265: INFO: Pod downward-api-08cf4dce-1fd4-448b-bbc9-97c21f4c2113 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:26:41.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1381" for this suite.
Feb 22 14:26:47.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:26:47.494: INFO: namespace downward-api-1381 deletion completed in 6.210746856s

• [SLOW TEST:14.511 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
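The "pod UID as env vars" test above creates a pod whose container env is populated from pod metadata via the downward API. The log does not include the manifest itself; the sketch below is an illustrative reconstruction of what such a pod looks like (names like `downward-api-demo` and `POD_UID` are hypothetical, not from the log), expressed as a plain Python dict following the PodSpec schema:

```python
# Illustrative sketch (not taken from the log) of a downward-API pod:
# an env var sourced from metadata.uid via valueFrom/fieldRef.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-demo"},  # hypothetical name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "busybox",
            "command": ["sh", "-c", "env"],
            "env": [{
                "name": "POD_UID",  # hypothetical env var name
                # valueFrom/fieldRef exposes pod metadata to the container
                "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
            }],
        }],
    },
}

env = pod["spec"]["containers"][0]["env"][0]
print(env["name"], env["valueFrom"]["fieldRef"]["fieldPath"])
```

The pod runs to completion (`Phase="Succeeded"` in the log) because the container just prints its environment and exits; the test then reads the container logs to verify the UID appeared.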
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:26:47.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 22 14:26:47.616: INFO: Waiting up to 5m0s for pod "pod-523f4dd0-862c-4b41-bb96-4b6a36a68056" in namespace "emptydir-50" to be "success or failure"
Feb 22 14:26:47.620: INFO: Pod "pod-523f4dd0-862c-4b41-bb96-4b6a36a68056": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237934ms
Feb 22 14:26:49.635: INFO: Pod "pod-523f4dd0-862c-4b41-bb96-4b6a36a68056": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019072352s
Feb 22 14:26:51.644: INFO: Pod "pod-523f4dd0-862c-4b41-bb96-4b6a36a68056": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028206346s
Feb 22 14:26:53.661: INFO: Pod "pod-523f4dd0-862c-4b41-bb96-4b6a36a68056": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044964582s
Feb 22 14:26:55.669: INFO: Pod "pod-523f4dd0-862c-4b41-bb96-4b6a36a68056": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052641204s
Feb 22 14:26:57.682: INFO: Pod "pod-523f4dd0-862c-4b41-bb96-4b6a36a68056": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066469843s
STEP: Saw pod success
Feb 22 14:26:57.683: INFO: Pod "pod-523f4dd0-862c-4b41-bb96-4b6a36a68056" satisfied condition "success or failure"
Feb 22 14:26:57.688: INFO: Trying to get logs from node iruya-node pod pod-523f4dd0-862c-4b41-bb96-4b6a36a68056 container test-container: 
STEP: delete the pod
Feb 22 14:26:57.821: INFO: Waiting for pod pod-523f4dd0-862c-4b41-bb96-4b6a36a68056 to disappear
Feb 22 14:26:57.842: INFO: Pod pod-523f4dd0-862c-4b41-bb96-4b6a36a68056 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:26:57.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-50" for this suite.
Feb 22 14:27:03.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:27:03.972: INFO: namespace emptydir-50 deletion completed in 6.117055318s

• [SLOW TEST:16.478 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:27:03.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-zxbv
STEP: Creating a pod to test atomic-volume-subpath
Feb 22 14:27:04.109: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zxbv" in namespace "subpath-4902" to be "success or failure"
Feb 22 14:27:04.231: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Pending", Reason="", readiness=false. Elapsed: 121.796218ms
Feb 22 14:27:06.249: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139864455s
Feb 22 14:27:08.266: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156969138s
Feb 22 14:27:10.313: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203657316s
Feb 22 14:27:12.323: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 8.213768505s
Feb 22 14:27:14.337: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 10.2280921s
Feb 22 14:27:16.348: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 12.239483235s
Feb 22 14:27:18.368: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 14.259391075s
Feb 22 14:27:20.377: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 16.267824399s
Feb 22 14:27:22.396: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 18.287172805s
Feb 22 14:27:24.421: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 20.312407779s
Feb 22 14:27:26.433: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 22.324588129s
Feb 22 14:27:28.448: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 24.339186697s
Feb 22 14:27:30.464: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Running", Reason="", readiness=true. Elapsed: 26.355436743s
Feb 22 14:27:32.482: INFO: Pod "pod-subpath-test-configmap-zxbv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.37331619s
STEP: Saw pod success
Feb 22 14:27:32.483: INFO: Pod "pod-subpath-test-configmap-zxbv" satisfied condition "success or failure"
Feb 22 14:27:32.488: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-zxbv container test-container-subpath-configmap-zxbv: 
STEP: delete the pod
Feb 22 14:27:32.657: INFO: Waiting for pod pod-subpath-test-configmap-zxbv to disappear
Feb 22 14:27:32.665: INFO: Pod pod-subpath-test-configmap-zxbv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zxbv
Feb 22 14:27:32.666: INFO: Deleting pod "pod-subpath-test-configmap-zxbv" in namespace "subpath-4902"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:27:32.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4902" for this suite.
Feb 22 14:27:38.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:27:38.864: INFO: namespace subpath-4902 deletion completed in 6.178584449s

• [SLOW TEST:34.890 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
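The subpath test above mounts a single ConfigMap key over an existing file rather than shadowing a whole directory. The log does not show the volume spec; the following is a minimal sketch of that pattern (the ConfigMap name, key, and mount path here are assumptions for illustration only):

```python
# Illustrative sketch (not from the log) of mounting one ConfigMap key at an
# existing file path via subPath, as the Atomic writer subpath test exercises.
container = {
    "name": "test-container-subpath",
    "image": "busybox",
    "volumeMounts": [{
        "name": "config-volume",
        "mountPath": "/etc/config-file",  # hypothetical existing file path
        "subPath": "config-file",         # mounts this single key, not the dir
    }],
}
volume = {
    "name": "config-volume",
    "configMap": {"name": "my-config"},   # hypothetical ConfigMap name
}

mount = container["volumeMounts"][0]
print(mount["mountPath"], mount["subPath"])
```

With `subPath`, only the named key is projected at `mountPath`; sibling files at that path inside the container image remain visible, which is what "mountPath of existing file" verifies.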
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:27:38.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f34fe486-4a8c-474d-ba60-ba8d2bc90130
STEP: Creating a pod to test consume secrets
Feb 22 14:27:39.090: INFO: Waiting up to 5m0s for pod "pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663" in namespace "secrets-388" to be "success or failure"
Feb 22 14:27:39.100: INFO: Pod "pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663": Phase="Pending", Reason="", readiness=false. Elapsed: 9.355689ms
Feb 22 14:27:41.116: INFO: Pod "pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025829043s
Feb 22 14:27:43.124: INFO: Pod "pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033901267s
Feb 22 14:27:45.138: INFO: Pod "pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047843896s
Feb 22 14:27:47.151: INFO: Pod "pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060861547s
STEP: Saw pod success
Feb 22 14:27:47.152: INFO: Pod "pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663" satisfied condition "success or failure"
Feb 22 14:27:47.156: INFO: Trying to get logs from node iruya-node pod pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663 container secret-volume-test: 
STEP: delete the pod
Feb 22 14:27:47.240: INFO: Waiting for pod pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663 to disappear
Feb 22 14:27:47.245: INFO: Pod pod-secrets-18cb108e-2e01-439c-ae64-4e983c5cc663 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:27:47.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-388" for this suite.
Feb 22 14:27:53.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:27:53.431: INFO: namespace secrets-388 deletion completed in 6.179602846s

• [SLOW TEST:14.566 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:27:53.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-d556fffb-41da-460e-a85a-b1fc6f78039f
STEP: Creating a pod to test consume configMaps
Feb 22 14:27:53.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c" in namespace "configmap-1737" to be "success or failure"
Feb 22 14:27:53.684: INFO: Pod "pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c": Phase="Pending", Reason="", readiness=false. Elapsed: 85.680369ms
Feb 22 14:27:55.716: INFO: Pod "pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117616287s
Feb 22 14:27:57.725: INFO: Pod "pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126530455s
Feb 22 14:27:59.748: INFO: Pod "pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149258656s
Feb 22 14:28:01.861: INFO: Pod "pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262657111s
Feb 22 14:28:03.876: INFO: Pod "pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.277973759s
Feb 22 14:28:05.889: INFO: Pod "pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.290567827s
STEP: Saw pod success
Feb 22 14:28:05.889: INFO: Pod "pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c" satisfied condition "success or failure"
Feb 22 14:28:05.895: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c container configmap-volume-test: 
STEP: delete the pod
Feb 22 14:28:05.987: INFO: Waiting for pod pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c to disappear
Feb 22 14:28:05.995: INFO: Pod pod-configmaps-a8cbd27c-cece-4c3b-a13c-a0d55e64440c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:28:05.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1737" for this suite.
Feb 22 14:28:12.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:28:12.537: INFO: namespace configmap-1737 deletion completed in 6.477692142s

• [SLOW TEST:19.106 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:28:12.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e96a981c-04e3-4715-9107-0d199c7e1514
STEP: Creating a pod to test consume configMaps
Feb 22 14:28:12.657: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191" in namespace "projected-5756" to be "success or failure"
Feb 22 14:28:12.672: INFO: Pod "pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191": Phase="Pending", Reason="", readiness=false. Elapsed: 14.85334ms
Feb 22 14:28:14.681: INFO: Pod "pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023438956s
Feb 22 14:28:16.690: INFO: Pod "pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031996236s
Feb 22 14:28:18.709: INFO: Pod "pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05143337s
Feb 22 14:28:20.719: INFO: Pod "pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061258998s
STEP: Saw pod success
Feb 22 14:28:20.719: INFO: Pod "pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191" satisfied condition "success or failure"
Feb 22 14:28:20.724: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 22 14:28:20.790: INFO: Waiting for pod pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191 to disappear
Feb 22 14:28:20.797: INFO: Pod pod-projected-configmaps-6b46367d-6999-4227-88fd-f37009952191 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:28:20.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5756" for this suite.
Feb 22 14:28:26.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:28:26.932: INFO: namespace projected-5756 deletion completed in 6.128118001s

• [SLOW TEST:14.393 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:28:26.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-932a0178-c4b8-47fb-9ffc-e2267a7f8ec8
STEP: Creating configMap with name cm-test-opt-upd-05fe1c3f-9d55-4701-8c8a-9ef07eb1b54d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-932a0178-c4b8-47fb-9ffc-e2267a7f8ec8
STEP: Updating configmap cm-test-opt-upd-05fe1c3f-9d55-4701-8c8a-9ef07eb1b54d
STEP: Creating configMap with name cm-test-opt-create-c3c48d0b-d94d-4bd6-b008-1505f584f274
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:28:41.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4115" for this suite.
Feb 22 14:29:05.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:29:05.502: INFO: namespace configmap-4115 deletion completed in 24.155638592s

• [SLOW TEST:38.569 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:29:05.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:29:14.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7213" for this suite.
Feb 22 14:29:36.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:29:36.903: INFO: namespace replication-controller-7213 deletion completed in 22.153707513s

• [SLOW TEST:31.401 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
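The ReplicationController adoption test above relies on equality-based label selection: an orphan pod is adopted when its labels satisfy the controller's selector. A minimal sketch of that matching rule (the helper `selector_matches` is illustrative, not the e2e framework's code):

```python
# Sketch of the label-matching rule behind "adopt matching pods on creation":
# an equality-based selector matches when every selector key/value pair is
# present in the pod's labels.
def selector_matches(selector: dict, labels: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

# Mirrors the test's STEP lines: a pod labeled name=pod-adoption, then an RC
# whose selector targets that label.
orphan_pod_labels = {"name": "pod-adoption"}
rc_selector = {"name": "pod-adoption"}
print(selector_matches(rc_selector, orphan_pod_labels))  # → True
```

Extra labels on the pod do not prevent adoption; only the selector's own keys are checked.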
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:29:36.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 14:29:37.050: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 22 14:29:45.592: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 22 14:29:55.659: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-1296,SelfLink:/apis/apps/v1/namespaces/deployment-1296/deployments/test-cleanup-deployment,UID:7f25ae29-4767-4d8f-9fb3-8cb4905387a2,ResourceVersion:25336525,Generation:1,CreationTimestamp:2020-02-22 14:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-22 14:29:45 +0000 UTC 2020-02-22 14:29:45 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-22 14:29:53 +0000 UTC 2020-02-22 14:29:45 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 22 14:29:55.669: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-1296,SelfLink:/apis/apps/v1/namespaces/deployment-1296/replicasets/test-cleanup-deployment-55bbcbc84c,UID:f2669e28-9076-49a1-bf61-d19004f85a9f,ResourceVersion:25336514,Generation:1,CreationTimestamp:2020-02-22 14:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 7f25ae29-4767-4d8f-9fb3-8cb4905387a2 0xc0031867c7 0xc0031867c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 22 14:29:55.674: INFO: Pod "test-cleanup-deployment-55bbcbc84c-gckxv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-gckxv,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-1296,SelfLink:/api/v1/namespaces/deployment-1296/pods/test-cleanup-deployment-55bbcbc84c-gckxv,UID:86a9ca2d-0470-4d54-9f31-ae3bf4b0cf93,ResourceVersion:25336513,Generation:0,CreationTimestamp:2020-02-22 14:29:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c f2669e28-9076-49a1-bf61-d19004f85a9f 0xc001d4bfd7 0xc001d4bfd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z2st6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z2st6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-z2st6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027ee050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027ee160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:29:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:29:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:29:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:29:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-22 14:29:45 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-22 14:29:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://75851b6668314815f13725bbf1a0f5cdd07d66498713a2dff053e1e9edfa674d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:29:55.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1296" for this suite.
Feb 22 14:30:01.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:30:01.826: INFO: namespace deployment-1296 deletion completed in 6.146699563s

• [SLOW TEST:24.922 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
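The deployment dump above shows `RevisionHistoryLimit:*0`, which is why the test expects the superseded ReplicaSet to be deleted once the rollout completes. A minimal sketch of that pruning rule (hypothetical helper names, not the deployment controller's actual code):

```python
def prune_old_replicasets(replicasets, revision_history_limit):
    """Return the names of superseded ReplicaSets to delete.

    `replicasets` is a list of (name, revision, replicas) tuples. ReplicaSets
    that still own pods (replicas > 0) are never pruned, mirroring the
    controller's behaviour; of the rest, only the newest
    `revision_history_limit` are kept.
    """
    # Only fully scaled-down ReplicaSets are candidates for deletion.
    candidates = [rs for rs in replicasets if rs[2] == 0]
    candidates.sort(key=lambda rs: rs[1])  # oldest revision first
    excess = len(candidates) - revision_history_limit
    return [rs[0] for rs in candidates[:max(excess, 0)]]

# With revisionHistoryLimit=0, every old, scaled-down ReplicaSet is deleted.
doomed = prune_old_replicasets(
    [("test-cleanup-deployment-55bbcbc84c", 1, 1),  # current, keeps its pod
     ("old-rs-a", 0, 0)],                            # superseded, empty
    revision_history_limit=0,
)
```

With a non-zero limit the same helper retains that many old ReplicaSets for rollback, which is the default Deployment behaviour.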
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:30:01.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 22 14:30:02.051: INFO: Waiting up to 5m0s for pod "var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937" in namespace "var-expansion-4573" to be "success or failure"
Feb 22 14:30:02.090: INFO: Pod "var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937": Phase="Pending", Reason="", readiness=false. Elapsed: 38.277375ms
Feb 22 14:30:04.102: INFO: Pod "var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050297808s
Feb 22 14:30:06.113: INFO: Pod "var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06181708s
Feb 22 14:30:08.122: INFO: Pod "var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069992305s
Feb 22 14:30:10.136: INFO: Pod "var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084201487s
Feb 22 14:30:12.143: INFO: Pod "var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937": Phase="Running", Reason="", readiness=true. Elapsed: 10.091527394s
Feb 22 14:30:14.154: INFO: Pod "var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.102206553s
STEP: Saw pod success
Feb 22 14:30:14.162: INFO: Pod "var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937" satisfied condition "success or failure"
Feb 22 14:30:14.169: INFO: Trying to get logs from node iruya-node pod var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937 container dapi-container: 
STEP: delete the pod
Feb 22 14:30:14.385: INFO: Waiting for pod var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937 to disappear
Feb 22 14:30:14.391: INFO: Pod var-expansion-49cc1a42-0055-4168-9839-c1c04e29f937 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:30:14.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4573" for this suite.
Feb 22 14:30:20.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:30:20.575: INFO: namespace var-expansion-4573 deletion completed in 6.176838792s

• [SLOW TEST:18.749 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
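The Variable Expansion test above composes env vars by referencing earlier ones with `$(VAR)` syntax. A simplified sketch of that expansion behaviour (not kubelet's actual expansion code; `$$(VAR)` escaping and unknown-reference handling follow the documented semantics):

```python
import re

def expand_env(env_list):
    """Resolve $(VAR) references against earlier entries in the env list.

    `$$(VAR)` escapes to a literal `$(VAR)`; references to variables that
    have not been defined yet are left untouched.
    """
    resolved = {}
    for name, value in env_list:
        def repl(m):
            if m.group(0).startswith("$$"):
                return m.group(0)[1:]            # $$ -> literal $
            return resolved.get(m.group(1), m.group(0))
        resolved[name] = re.sub(r"\$?\$\((\w+)\)", repl, value)
    return resolved

env = expand_env([
    ("FOO", "foo-value"),
    ("BAR", "bar-value"),
    ("FOOBAR", "$(FOO);;$(BAR)"),   # composed from the two entries above
])
```

This is the mechanism the dapi-container in the test prints and asserts on.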
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:30:20.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 22 14:30:20.709: INFO: Waiting up to 5m0s for pod "pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81" in namespace "emptydir-9698" to be "success or failure"
Feb 22 14:30:20.715: INFO: Pod "pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81": Phase="Pending", Reason="", readiness=false. Elapsed: 5.481932ms
Feb 22 14:30:22.723: INFO: Pod "pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013383006s
Feb 22 14:30:24.734: INFO: Pod "pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024840408s
Feb 22 14:30:26.741: INFO: Pod "pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031906279s
Feb 22 14:30:28.754: INFO: Pod "pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044652213s
Feb 22 14:30:30.790: INFO: Pod "pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080123368s
STEP: Saw pod success
Feb 22 14:30:30.790: INFO: Pod "pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81" satisfied condition "success or failure"
Feb 22 14:30:30.811: INFO: Trying to get logs from node iruya-node pod pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81 container test-container: 
STEP: delete the pod
Feb 22 14:30:30.983: INFO: Waiting for pod pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81 to disappear
Feb 22 14:30:31.002: INFO: Pod pod-1b4fdbd5-b30e-4e4f-b000-7bed09949a81 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:30:31.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9698" for this suite.
Feb 22 14:30:37.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:30:37.307: INFO: namespace emptydir-9698 deletion completed in 6.298023176s

• [SLOW TEST:16.731 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
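The emptyDir test above writes a file into a tmpfs-backed volume and verifies it carries exactly mode 0644. The real check runs inside the pod's mounttest container; a local sketch of the same permission assertion (assuming a POSIX filesystem):

```python
import os
import stat
import tempfile

def file_mode_matches(path, expected_mode):
    """Return True if `path` has exactly the permission bits `expected_mode`
    (e.g. 0o644), ignoring file-type bits."""
    return stat.S_IMODE(os.stat(path).st_mode) == expected_mode

# Create a file and set the mode the test expects for (root,0644,tmpfs).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"mount-tester new file\n")
    path = f.name
os.chmod(path, 0o644)
```

`stat.S_IMODE` strips the file-type bits so the comparison is against the permission bits alone, which is what the `-rw-r--r--` output in the container corresponds to.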
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:30:37.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 22 14:30:37.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6340'
Feb 22 14:30:39.781: INFO: stderr: ""
Feb 22 14:30:39.781: INFO: stdout: "pod/pause created\n"
Feb 22 14:30:39.781: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 22 14:30:39.782: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6340" to be "running and ready"
Feb 22 14:30:39.876: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 94.475993ms
Feb 22 14:30:41.895: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112969461s
Feb 22 14:30:43.906: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124217163s
Feb 22 14:30:45.911: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129555084s
Feb 22 14:30:47.917: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.13534966s
Feb 22 14:30:47.917: INFO: Pod "pause" satisfied condition "running and ready"
Feb 22 14:30:47.917: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 22 14:30:47.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6340'
Feb 22 14:30:48.110: INFO: stderr: ""
Feb 22 14:30:48.111: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 22 14:30:48.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6340'
Feb 22 14:30:48.675: INFO: stderr: ""
Feb 22 14:30:48.676: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 22 14:30:48.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6340'
Feb 22 14:30:48.780: INFO: stderr: ""
Feb 22 14:30:48.780: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 22 14:30:48.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6340'
Feb 22 14:30:48.885: INFO: stderr: ""
Feb 22 14:30:48.885: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 22 14:30:48.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6340'
Feb 22 14:30:49.030: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 22 14:30:49.030: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 22 14:30:49.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6340'
Feb 22 14:30:49.268: INFO: stderr: "No resources found.\n"
Feb 22 14:30:49.269: INFO: stdout: ""
Feb 22 14:30:49.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6340 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 22 14:30:49.345: INFO: stderr: ""
Feb 22 14:30:49.346: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:30:49.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6340" for this suite.
Feb 22 14:30:55.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:30:55.540: INFO: namespace kubectl-6340 deletion completed in 6.186182273s

• [SLOW TEST:18.233 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
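The kubectl label test above adds `testing-label=testing-label-value` and then removes it with the trailing-dash form (`testing-label-`). A simplified sketch of those CLI semantics applied to a pod's label map (kubectl itself additionally handles `--overwrite` and validation):

```python
def apply_label_ops(labels, ops):
    """Apply kubectl-label-style operations to a label map.

    "key=value" sets a label; "key-" removes it if present.
    """
    labels = dict(labels)  # leave the caller's map untouched
    for op in ops:
        if op.endswith("-"):
            labels.pop(op[:-1], None)
        else:
            key, _, value = op.partition("=")
            labels[key] = value
    return labels

after_add = apply_label_ops({}, ["testing-label=testing-label-value"])
after_remove = apply_label_ops(after_add, ["testing-label-"])
```

The `-L testing-label` column in the test's `kubectl get` output shows the value after the add and an empty cell after the removal, matching these two states.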
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:30:55.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-e95e9320-5a1c-4e3c-bd67-63786698ab4d
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:31:09.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5228" for this suite.
Feb 22 14:31:31.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:31:31.926: INFO: namespace configmap-5228 deletion completed in 22.169197372s

• [SLOW TEST:36.385 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
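The ConfigMap test above checks that both text (`data`) and binary (`binaryData`) keys materialize as files in the mounted volume; `binaryData` values are base64-encoded in the API object and decoded on projection. A sketch of that projection (illustrative only, not kubelet code):

```python
import base64

def configmap_volume_files(data, binary_data):
    """Render a ConfigMap's keys to the bytes a projected volume would hold.

    `data` values are UTF-8 strings; `binary_data` values are base64 strings
    as stored in the API object, decoded to raw bytes on disk.
    """
    files = {k: v.encode("utf-8") for k, v in data.items()}
    files.update({k: base64.b64decode(v) for k, v in binary_data.items()})
    return files

files = configmap_volume_files(
    {"data-1": "value-1"},
    {"dump.bin": base64.b64encode(b"\xde\xad\xbe\xef").decode("ascii")},
)
```

This is why the test waits for two distinct conditions: one pod read confirming the text data and one confirming the binary data round-tripped intact.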
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:31:31.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-867fe20d-fab0-48c7-a715-3767c0159270
STEP: Creating a pod to test consume secrets
Feb 22 14:31:32.058: INFO: Waiting up to 5m0s for pod "pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a" in namespace "secrets-7555" to be "success or failure"
Feb 22 14:31:32.080: INFO: Pod "pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.709345ms
Feb 22 14:31:34.151: INFO: Pod "pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092494981s
Feb 22 14:31:36.159: INFO: Pod "pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100761131s
Feb 22 14:31:38.167: INFO: Pod "pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109030753s
Feb 22 14:31:40.175: INFO: Pod "pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11630367s
Feb 22 14:31:42.190: INFO: Pod "pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131180615s
STEP: Saw pod success
Feb 22 14:31:42.190: INFO: Pod "pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a" satisfied condition "success or failure"
Feb 22 14:31:42.204: INFO: Trying to get logs from node iruya-node pod pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a container secret-volume-test: 
STEP: delete the pod
Feb 22 14:31:42.369: INFO: Waiting for pod pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a to disappear
Feb 22 14:31:42.379: INFO: Pod pod-secrets-2f7cc61e-0a8e-4d56-9c81-c87c9def343a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:31:42.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7555" for this suite.
Feb 22 14:31:48.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:31:48.553: INFO: namespace secrets-7555 deletion completed in 6.167950744s

• [SLOW TEST:16.627 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
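The Secrets test above exercises volume "mappings": an `items` list on the volume source that remaps secret keys to custom file paths. A sketch of that mapping behaviour (a toy model, not the kubelet implementation):

```python
def project_secret(secret_data, items=None):
    """Project secret keys into volume file paths.

    Without `items`, every key becomes a file named after the key. With
    `items`, only the listed keys appear, each at its mapped `path`.
    """
    if not items:
        return dict(secret_data)
    return {it["path"]: secret_data[it["key"]] for it in items}

# Mapped: the key "data-1" lands at a custom path instead of its own name.
projected = project_secret(
    {"data-1": b"value-1"},
    items=[{"key": "data-1", "path": "new-path-data-1"}],
)
```

The secret-volume-test container then reads the file at the mapped path and compares it against the expected content, which is what "consumable ... with mappings" asserts.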
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:31:48.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:31:56.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8679" for this suite.
Feb 22 14:32:03.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:32:03.128: INFO: namespace namespaces-8679 deletion completed in 6.155817118s
STEP: Destroying namespace "nsdeletetest-9543" for this suite.
Feb 22 14:32:03.131: INFO: Namespace nsdeletetest-9543 was already deleted
STEP: Destroying namespace "nsdeletetest-9815" for this suite.
Feb 22 14:32:09.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:32:09.291: INFO: namespace nsdeletetest-9815 deletion completed in 6.159210654s

• [SLOW TEST:20.737 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:32:09.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 14:32:09.480: INFO: Creating deployment "test-recreate-deployment"
Feb 22 14:32:09.487: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 22 14:32:09.573: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 22 14:32:11.593: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 22 14:32:11.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 22 14:32:13.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 22 14:32:15.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 22 14:32:17.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978729, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 22 14:32:19.607: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 22 14:32:19.621: INFO: Updating deployment test-recreate-deployment
Feb 22 14:32:19.621: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 22 14:32:20.158: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-1326,SelfLink:/apis/apps/v1/namespaces/deployment-1326/deployments/test-recreate-deployment,UID:c1522d7f-7d0d-4a69-a507-1adef2ebfe32,ResourceVersion:25336945,Generation:2,CreationTimestamp:2020-02-22 14:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-22 14:32:20 +0000 UTC 2020-02-22 14:32:20 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-22 14:32:20 +0000 UTC 2020-02-22 14:32:09 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 22 14:32:20.211: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-1326,SelfLink:/apis/apps/v1/namespaces/deployment-1326/replicasets/test-recreate-deployment-5c8c9cc69d,UID:28f1d3dd-09de-4a7a-adca-df1f381d3892,ResourceVersion:25336944,Generation:1,CreationTimestamp:2020-02-22 14:32:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c1522d7f-7d0d-4a69-a507-1adef2ebfe32 0xc000d3a387 0xc000d3a388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 22 14:32:20.211: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 22 14:32:20.211: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-1326,SelfLink:/apis/apps/v1/namespaces/deployment-1326/replicasets/test-recreate-deployment-6df85df6b9,UID:443b49b9-f407-4158-bd78-508697d96b5f,ResourceVersion:25336932,Generation:2,CreationTimestamp:2020-02-22 14:32:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c1522d7f-7d0d-4a69-a507-1adef2ebfe32 0xc000d3a467 0xc000d3a468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 22 14:32:20.218: INFO: Pod "test-recreate-deployment-5c8c9cc69d-f54pb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-f54pb,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-1326,SelfLink:/api/v1/namespaces/deployment-1326/pods/test-recreate-deployment-5c8c9cc69d-f54pb,UID:2fbc7f89-14cc-44bf-a84c-487e930716b0,ResourceVersion:25336947,Generation:0,CreationTimestamp:2020-02-22 14:32:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 28f1d3dd-09de-4a7a-adca-df1f381d3892 0xc002e66437 0xc002e66438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-flprp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-flprp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-flprp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002e664b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002e664d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:32:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:32:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:32:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:32:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-22 14:32:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:32:20.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1326" for this suite.
Feb 22 14:32:26.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:32:26.352: INFO: namespace deployment-1326 deletion completed in 6.127627307s

• [SLOW TEST:17.060 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
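The Recreate test above rolls a Deployment whose strategy tears down every old pod before any new pod is created (note `Strategy{Type:Recreate,RollingUpdate:nil}` in the dump). A minimal manifest sketch of the kind of object the test builds, using the labels and image visible in the log; the exact spec is generated by the e2e framework, so treat this as illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment   # name as seen in the log
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate        # delete all old pods before creating new ones (no RollingUpdate block)
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

This is why the watch in the test expects to see the old ReplicaSet scaled to 0 before the new pod appears, and why the Deployment briefly reports `MinimumReplicasUnavailable` during the switch.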
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:32:26.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 22 14:32:28.165: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6269,SelfLink:/api/v1/namespaces/watch-6269/configmaps/e2e-watch-test-watch-closed,UID:e2b258e0-6e1d-47d2-af28-6383f3db0eed,ResourceVersion:25336983,Generation:0,CreationTimestamp:2020-02-22 14:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 22 14:32:28.167: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6269,SelfLink:/api/v1/namespaces/watch-6269/configmaps/e2e-watch-test-watch-closed,UID:e2b258e0-6e1d-47d2-af28-6383f3db0eed,ResourceVersion:25336984,Generation:0,CreationTimestamp:2020-02-22 14:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 22 14:32:28.243: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6269,SelfLink:/api/v1/namespaces/watch-6269/configmaps/e2e-watch-test-watch-closed,UID:e2b258e0-6e1d-47d2-af28-6383f3db0eed,ResourceVersion:25336985,Generation:0,CreationTimestamp:2020-02-22 14:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 22 14:32:28.243: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6269,SelfLink:/api/v1/namespaces/watch-6269/configmaps/e2e-watch-test-watch-closed,UID:e2b258e0-6e1d-47d2-af28-6383f3db0eed,ResourceVersion:25336986,Generation:0,CreationTimestamp:2020-02-22 14:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:32:28.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6269" for this suite.
Feb 22 14:32:34.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:32:34.452: INFO: namespace watch-6269 deletion completed in 6.124387312s

• [SLOW TEST:8.101 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
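The watch restart above works by replaying the last `resourceVersion` the first watch observed back to the API server, which then streams only events newer than that version. A hedged shell sketch of the request the test effectively makes; the proxy port and curl usage are assumptions, while the namespace and resourceVersion are taken from the log lines above:

```shell
# Resume a configmap watch from a known resourceVersion.
NS=watch-6269
RV=25336984   # last resourceVersion the first watch observed before closing
URL="http://127.0.0.1:8001/api/v1/namespaces/${NS}/configmaps?watch=1&resourceVersion=${RV}"
echo "${URL}"
# With `kubectl proxy` running on port 8001, `curl -N "$URL"` would stream only
# events newer than RV: here the MODIFIED (mutation: 2) and DELETED events,
# matching what the test observed at 14:32:28.243.
```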
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:32:34.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 22 14:32:34.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9380'
Feb 22 14:32:34.683: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 22 14:32:34.684: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 22 14:32:34.700: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 22 14:32:34.712: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 22 14:32:34.732: INFO: scanned /root for discovery docs: 
Feb 22 14:32:34.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9380'
Feb 22 14:32:59.256: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 22 14:32:59.256: INFO: stdout: "Created e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c\nScaling up e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb 22 14:32:59.257: INFO: stdout: "Created e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c\nScaling up e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 22 14:32:59.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9380'
Feb 22 14:32:59.406: INFO: stderr: ""
Feb 22 14:32:59.407: INFO: stdout: "e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c-vvh9v "
Feb 22 14:32:59.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c-vvh9v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9380'
Feb 22 14:32:59.593: INFO: stderr: ""
Feb 22 14:32:59.594: INFO: stdout: "true"
Feb 22 14:32:59.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c-vvh9v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9380'
Feb 22 14:32:59.675: INFO: stderr: ""
Feb 22 14:32:59.675: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 22 14:32:59.675: INFO: e2e-test-nginx-rc-5c20df55894a991ee6dde53041520f7c-vvh9v is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 22 14:32:59.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9380'
Feb 22 14:32:59.782: INFO: stderr: ""
Feb 22 14:32:59.782: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:32:59.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9380" for this suite.
Feb 22 14:33:21.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:33:21.922: INFO: namespace kubectl-9380 deletion completed in 22.105032744s

• [SLOW TEST:47.469 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:33:21.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-71e32c8b-f509-4dcf-9fcc-3b713442a53d
STEP: Creating a pod to test consume configMaps
Feb 22 14:33:22.074: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11" in namespace "projected-3117" to be "success or failure"
Feb 22 14:33:22.117: INFO: Pod "pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11": Phase="Pending", Reason="", readiness=false. Elapsed: 42.729992ms
Feb 22 14:33:24.127: INFO: Pod "pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053226866s
Feb 22 14:33:26.135: INFO: Pod "pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060614816s
Feb 22 14:33:28.142: INFO: Pod "pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067454201s
Feb 22 14:33:30.159: INFO: Pod "pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085366158s
STEP: Saw pod success
Feb 22 14:33:30.160: INFO: Pod "pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11" satisfied condition "success or failure"
Feb 22 14:33:30.169: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 22 14:33:30.254: INFO: Waiting for pod pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11 to disappear
Feb 22 14:33:30.268: INFO: Pod pod-projected-configmaps-ab06b144-85bd-47ef-8335-6b6167b99d11 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:33:30.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3117" for this suite.
Feb 22 14:33:36.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:33:36.486: INFO: namespace projected-3117 deletion completed in 6.210725213s

• [SLOW TEST:14.564 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
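The projected-configMap test above mounts a ConfigMap through a `projected` volume with a key-to-path mapping and an explicit per-item file mode (the "mappings and Item mode set" in the test name). A sketch of the kind of pod spec it exercises; the key, path, and mode are illustrative assumptions, since the e2e framework generates its own values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative; the test uses a generated UUID name
spec:
  containers:
  - name: projected-configmap-volume-test  # container name as seen in the log
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative ConfigMap name
          items:
          - key: data-1            # map this key...
            path: path/to/data-2   # ...to a different file path in the volume
            mode: 0400             # per-item file mode; this is the "[LinuxOnly]" part
  restartPolicy: Never
```

The test pod runs to `Succeeded` once its container has read the mapped file back and verified its contents and mode, which is why the log waits for "success or failure" rather than for readiness.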
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:33:36.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 22 14:33:36.666: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 22 14:33:37.232: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 22 14:33:39.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 22 14:33:41.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 22 14:33:43.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 22 14:33:45.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717978817, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 22 14:33:48.446: INFO: Waited 883.395506ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:33:49.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-9296" for this suite.
Feb 22 14:33:55.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:33:55.230: INFO: namespace aggregator-9296 deletion completed in 6.114279947s

• [SLOW TEST:18.743 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:33:55.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5847ac5e-f7a1-4915-9d81-780362d33eee
STEP: Creating secret with name s-test-opt-upd-b7cbfd03-bf95-421c-9195-669129f1b252
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5847ac5e-f7a1-4915-9d81-780362d33eee
STEP: Updating secret s-test-opt-upd-b7cbfd03-bf95-421c-9195-669129f1b252
STEP: Creating secret with name s-test-opt-create-fb452c84-59e6-4ba5-87da-e44b0771e8f5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:34:09.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4126" for this suite.
Feb 22 14:34:31.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:34:31.937: INFO: namespace projected-4126 deletion completed in 22.153307429s

• [SLOW TEST:36.706 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:34:31.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-kkcs
STEP: Creating a pod to test atomic-volume-subpath
Feb 22 14:34:32.062: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kkcs" in namespace "subpath-5506" to be "success or failure"
Feb 22 14:34:32.074: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Pending", Reason="", readiness=false. Elapsed: 11.675176ms
Feb 22 14:34:34.085: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022626137s
Feb 22 14:34:36.094: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031368256s
Feb 22 14:34:38.109: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046528826s
Feb 22 14:34:40.123: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060665194s
Feb 22 14:34:42.133: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 10.070517606s
Feb 22 14:34:44.145: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 12.082579783s
Feb 22 14:34:46.153: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 14.090838499s
Feb 22 14:34:48.160: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 16.098045219s
Feb 22 14:34:50.168: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 18.105740883s
Feb 22 14:34:52.176: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 20.113797695s
Feb 22 14:34:54.184: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 22.121421223s
Feb 22 14:34:56.197: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 24.134905001s
Feb 22 14:34:58.210: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 26.147568035s
Feb 22 14:35:00.222: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 28.160195901s
Feb 22 14:35:02.235: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Running", Reason="", readiness=true. Elapsed: 30.172609044s
Feb 22 14:35:04.241: INFO: Pod "pod-subpath-test-projected-kkcs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.179079306s
STEP: Saw pod success
Feb 22 14:35:04.241: INFO: Pod "pod-subpath-test-projected-kkcs" satisfied condition "success or failure"
Feb 22 14:35:04.245: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-kkcs container test-container-subpath-projected-kkcs: 
STEP: delete the pod
Feb 22 14:35:04.407: INFO: Waiting for pod pod-subpath-test-projected-kkcs to disappear
Feb 22 14:35:04.538: INFO: Pod pod-subpath-test-projected-kkcs no longer exists
STEP: Deleting pod pod-subpath-test-projected-kkcs
Feb 22 14:35:04.538: INFO: Deleting pod "pod-subpath-test-projected-kkcs" in namespace "subpath-5506"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:35:04.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5506" for this suite.
Feb 22 14:35:10.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:35:10.788: INFO: namespace subpath-5506 deletion completed in 6.200208229s

• [SLOW TEST:38.850 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:35:10.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8217.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8217.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8217.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8217.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8217.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8217.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8217.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8217.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8217.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8217.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8217.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.4.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.4.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.4.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.4.220_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8217.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8217.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8217.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8217.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8217.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8217.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8217.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8217.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8217.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8217.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8217.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.4.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.4.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.4.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.4.220_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 22 14:35:25.144: INFO: Unable to read wheezy_udp@dns-test-service.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.151: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.158: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.163: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.172: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.178: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.183: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.189: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.193: INFO: Unable to read 10.105.4.220_udp@PTR from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.199: INFO: Unable to read 10.105.4.220_tcp@PTR from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.208: INFO: Unable to read jessie_udp@dns-test-service.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.212: INFO: Unable to read jessie_tcp@dns-test-service.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.217: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.222: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.226: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.230: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-8217.svc.cluster.local from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.235: INFO: Unable to read jessie_udp@PodARecord from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.239: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.243: INFO: Unable to read 10.105.4.220_udp@PTR from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.248: INFO: Unable to read 10.105.4.220_tcp@PTR from pod dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637: the server could not find the requested resource (get pods dns-test-26535813-6ae9-4341-a5fd-6c903acda637)
Feb 22 14:35:25.248: INFO: Lookups using dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637 failed for: [wheezy_udp@dns-test-service.dns-8217.svc.cluster.local wheezy_tcp@dns-test-service.dns-8217.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-8217.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-8217.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.105.4.220_udp@PTR 10.105.4.220_tcp@PTR jessie_udp@dns-test-service.dns-8217.svc.cluster.local jessie_tcp@dns-test-service.dns-8217.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8217.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-8217.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-8217.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.105.4.220_udp@PTR 10.105.4.220_tcp@PTR]

Feb 22 14:35:30.418: INFO: DNS probes using dns-8217/dns-test-26535813-6ae9-4341-a5fd-6c903acda637 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:35:31.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8217" for this suite.
Feb 22 14:35:37.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:35:37.488: INFO: namespace dns-8217 deletion completed in 6.267292265s

• [SLOW TEST:26.700 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:35:37.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 22 14:35:37.563: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 22 14:35:42.577: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:35:43.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2311" for this suite.
Feb 22 14:35:49.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:35:49.877: INFO: namespace replication-controller-2311 deletion completed in 6.172322909s

• [SLOW TEST:12.389 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:35:49.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6078/configmap-test-b5bde600-e6ce-4bc5-89b3-70b529836f04
STEP: Creating a pod to test consume configMaps
Feb 22 14:35:50.063: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a" in namespace "configmap-6078" to be "success or failure"
Feb 22 14:35:50.078: INFO: Pod "pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.024559ms
Feb 22 14:35:52.088: INFO: Pod "pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024620419s
Feb 22 14:35:54.118: INFO: Pod "pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053901921s
Feb 22 14:35:56.129: INFO: Pod "pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064843965s
Feb 22 14:35:58.135: INFO: Pod "pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071495564s
Feb 22 14:36:00.192: INFO: Pod "pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.128548604s
Feb 22 14:36:02.199: INFO: Pod "pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.135013152s
STEP: Saw pod success
Feb 22 14:36:02.199: INFO: Pod "pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a" satisfied condition "success or failure"
Feb 22 14:36:02.201: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a container env-test: 
STEP: delete the pod
Feb 22 14:36:02.314: INFO: Waiting for pod pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a to disappear
Feb 22 14:36:02.627: INFO: Pod pod-configmaps-b6e485c0-2f56-43e8-baa9-a5f7e8fcc48a no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:36:02.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6078" for this suite.
Feb 22 14:36:08.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:36:08.788: INFO: namespace configmap-6078 deletion completed in 6.153732156s

• [SLOW TEST:18.910 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:36:08.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 22 14:36:08.889: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 22 14:36:08.914: INFO: Waiting for terminating namespaces to be deleted...
Feb 22 14:36:08.919: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb 22 14:36:08.932: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 22 14:36:08.932: INFO: 	Container weave ready: true, restart count 0
Feb 22 14:36:08.932: INFO: 	Container weave-npc ready: true, restart count 0
Feb 22 14:36:08.933: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 22 14:36:08.933: INFO: 	Container kube-bench ready: false, restart count 0
Feb 22 14:36:08.933: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 22 14:36:08.933: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 22 14:36:08.933: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 22 14:36:08.948: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 22 14:36:08.948: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 22 14:36:08.948: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 22 14:36:08.948: INFO: 	Container kube-scheduler ready: true, restart count 15
Feb 22 14:36:08.948: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 22 14:36:08.948: INFO: 	Container coredns ready: true, restart count 0
Feb 22 14:36:08.948: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 22 14:36:08.948: INFO: 	Container etcd ready: true, restart count 0
Feb 22 14:36:08.948: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 22 14:36:08.948: INFO: 	Container weave ready: true, restart count 0
Feb 22 14:36:08.948: INFO: 	Container weave-npc ready: true, restart count 0
Feb 22 14:36:08.948: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 22 14:36:08.948: INFO: 	Container coredns ready: true, restart count 0
Feb 22 14:36:08.948: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 22 14:36:08.948: INFO: 	Container kube-controller-manager ready: true, restart count 23
Feb 22 14:36:08.948: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 22 14:36:08.948: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 22 14:36:09.186: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 22 14:36:09.186: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 22 14:36:09.186: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 22 14:36:09.186: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 22 14:36:09.186: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 22 14:36:09.186: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 22 14:36:09.186: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 22 14:36:09.186: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 22 14:36:09.186: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 22 14:36:09.186: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c1e81a74-43b3-43e2-97fe-86a36beac6ec.15f5c032fa0dac41], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6926/filler-pod-c1e81a74-43b3-43e2-97fe-86a36beac6ec to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c1e81a74-43b3-43e2-97fe-86a36beac6ec.15f5c034417257ed], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c1e81a74-43b3-43e2-97fe-86a36beac6ec.15f5c0355b39ed02], Reason = [Created], Message = [Created container filler-pod-c1e81a74-43b3-43e2-97fe-86a36beac6ec]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c1e81a74-43b3-43e2-97fe-86a36beac6ec.15f5c03579a7be3d], Reason = [Started], Message = [Started container filler-pod-c1e81a74-43b3-43e2-97fe-86a36beac6ec]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cde11524-628a-4759-b95d-e9e77f226167.15f5c032f7bcbad8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6926/filler-pod-cde11524-628a-4759-b95d-e9e77f226167 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cde11524-628a-4759-b95d-e9e77f226167.15f5c03435ea9631], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cde11524-628a-4759-b95d-e9e77f226167.15f5c03522097c1d], Reason = [Created], Message = [Created container filler-pod-cde11524-628a-4759-b95d-e9e77f226167]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cde11524-628a-4759-b95d-e9e77f226167.15f5c03558eed12f], Reason = [Started], Message = [Started container filler-pod-cde11524-628a-4759-b95d-e9e77f226167]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f5c035c80db1c3], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:36:22.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6926" for this suite.
Feb 22 14:36:30.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:36:30.668: INFO: namespace sched-pred-6926 deletion completed in 8.152024282s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.880 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
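The FailedScheduling event above ("0/2 nodes are available: 2 Insufficient cpu.") is produced by the final step of this spec: after filler pods consume most of each node's allocatable CPU, one more pod is created with a request no node can satisfy. A minimal sketch of that last pod (the image and pod name appear in the events above; the request value is illustrative):

```yaml
# Sketch only: a pod whose CPU request exceeds what remains allocatable
# on every node, so the scheduler emits FailedScheduling.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "2"   # hypothetical value, larger than any node's free CPU
```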
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:36:30.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:37:20.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4916" for this suite.
Feb 22 14:37:26.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:37:27.142: INFO: namespace namespaces-4916 deletion completed in 6.18896114s
STEP: Destroying namespace "nsdeletetest-3031" for this suite.
Feb 22 14:37:27.144: INFO: Namespace nsdeletetest-3031 was already deleted
STEP: Destroying namespace "nsdeletetest-9082" for this suite.
Feb 22 14:37:33.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:37:33.302: INFO: namespace nsdeletetest-9082 deletion completed in 6.158035629s

• [SLOW TEST:62.633 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
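The namespace-deletion flow exercised above can be reproduced by hand; deleting a namespace cascades to every pod inside it, and recreating the namespace yields an empty one. A sketch of the equivalent commands (namespace and pod names are illustrative, not the generated ones from this run):

```shell
# Sketch of the flow the spec above verifies (requires a running cluster).
kubectl create namespace nsdeletetest
kubectl run test-pod --image=k8s.gcr.io/pause:3.1 -n nsdeletetest --restart=Never
kubectl wait pod/test-pod -n nsdeletetest --for=condition=Ready --timeout=60s
kubectl delete namespace nsdeletetest   # cascades: the pod is removed too
kubectl create namespace nsdeletetest   # recreate with the same name
kubectl get pods -n nsdeletetest        # expected: no pods remain
```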
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:37:33.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:37:33.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5277" for this suite.
Feb 22 14:37:39.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:37:39.986: INFO: namespace kubelet-test-5277 deletion completed in 6.26230825s

• [SLOW TEST:6.683 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:37:39.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 14:37:40.211: INFO: Waiting up to 5m0s for pod "downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853" in namespace "projected-1711" to be "success or failure"
Feb 22 14:37:40.221: INFO: Pod "downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853": Phase="Pending", Reason="", readiness=false. Elapsed: 10.488308ms
Feb 22 14:37:42.234: INFO: Pod "downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02313352s
Feb 22 14:37:44.247: INFO: Pod "downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036186164s
Feb 22 14:37:46.259: INFO: Pod "downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048129139s
Feb 22 14:37:48.267: INFO: Pod "downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056226267s
Feb 22 14:37:50.276: INFO: Pod "downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064713457s
STEP: Saw pod success
Feb 22 14:37:50.276: INFO: Pod "downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853" satisfied condition "success or failure"
Feb 22 14:37:50.280: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853 container client-container: 
STEP: delete the pod
Feb 22 14:37:50.413: INFO: Waiting for pod downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853 to disappear
Feb 22 14:37:50.439: INFO: Pod downwardapi-volume-837cabb2-aa97-4f88-ae1a-b52ebbb4e853 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:37:50.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1711" for this suite.
Feb 22 14:37:56.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:37:56.605: INFO: namespace projected-1711 deletion completed in 6.158739561s

• [SLOW TEST:16.618 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
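The "podname only" spec above creates a pod whose projected volume exposes just `metadata.name` through the downward API, then reads the file back from the container logs. A sketch of such a pod (names and image are illustrative; the e2e framework generates its own):

```yaml
# Sketch: projected downwardAPI volume exposing only the pod name.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```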
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:37:56.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 14:38:26.775: INFO: Container started at 2020-02-22 14:38:03 +0000 UTC, pod became ready at 2020-02-22 14:38:26 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:38:26.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7330" for this suite.
Feb 22 14:38:48.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:38:48.935: INFO: namespace container-probe-7330 deletion completed in 22.151855131s

• [SLOW TEST:52.329 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
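The readiness-probe spec above checks two things visible in the log line at 14:38:26: the pod became Ready well after the container started (the initial delay was honored) and restartCount stayed at 0 (a failing readiness probe never restarts a container, unlike a liveness probe). A sketch of a pod with such a probe (all names and values illustrative):

```yaml
# Sketch: readiness probe with an initial delay; readiness failures
# make the pod NotReady but never restart the container.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: probe-test
    image: busybox
    command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 20
      periodSeconds: 5
```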
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:38:48.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 22 14:38:49.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4904'
Feb 22 14:38:49.509: INFO: stderr: ""
Feb 22 14:38:49.509: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 22 14:38:50.527: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:50.527: INFO: Found 0 / 1
Feb 22 14:38:51.517: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:51.517: INFO: Found 0 / 1
Feb 22 14:38:52.523: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:52.523: INFO: Found 0 / 1
Feb 22 14:38:53.523: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:53.523: INFO: Found 0 / 1
Feb 22 14:38:54.520: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:54.520: INFO: Found 0 / 1
Feb 22 14:38:55.518: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:55.518: INFO: Found 0 / 1
Feb 22 14:38:56.524: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:56.525: INFO: Found 0 / 1
Feb 22 14:38:57.519: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:57.519: INFO: Found 0 / 1
Feb 22 14:38:58.521: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:58.521: INFO: Found 1 / 1
Feb 22 14:38:58.521: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 22 14:38:58.527: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 14:38:58.527: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 22 14:38:58.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr6k2 redis-master --namespace=kubectl-4904'
Feb 22 14:38:58.683: INFO: stderr: ""
Feb 22 14:38:58.683: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Feb 14:38:57.077 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Feb 14:38:57.078 # Server started, Redis version 3.2.12\n1:M 22 Feb 14:38:57.078 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Feb 14:38:57.078 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 22 14:38:58.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr6k2 redis-master --namespace=kubectl-4904 --tail=1'
Feb 22 14:38:58.797: INFO: stderr: ""
Feb 22 14:38:58.797: INFO: stdout: "1:M 22 Feb 14:38:57.078 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 22 14:38:58.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr6k2 redis-master --namespace=kubectl-4904 --limit-bytes=1'
Feb 22 14:38:58.931: INFO: stderr: ""
Feb 22 14:38:58.931: INFO: stdout: " "
STEP: exposing timestamps
Feb 22 14:38:58.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr6k2 redis-master --namespace=kubectl-4904 --tail=1 --timestamps'
Feb 22 14:38:59.073: INFO: stderr: ""
Feb 22 14:38:59.073: INFO: stdout: "2020-02-22T14:38:57.078815174Z 1:M 22 Feb 14:38:57.078 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 22 14:39:01.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr6k2 redis-master --namespace=kubectl-4904 --since=1s'
Feb 22 14:39:01.826: INFO: stderr: ""
Feb 22 14:39:01.826: INFO: stdout: ""
Feb 22 14:39:01.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mr6k2 redis-master --namespace=kubectl-4904 --since=24h'
Feb 22 14:39:01.942: INFO: stderr: ""
Feb 22 14:39:01.942: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Feb 14:38:57.077 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Feb 14:38:57.078 # Server started, Redis version 3.2.12\n1:M 22 Feb 14:38:57.078 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Feb 14:38:57.078 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 22 14:39:01.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4904'
Feb 22 14:39:02.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 22 14:39:02.157: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 22 14:39:02.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4904'
Feb 22 14:39:02.426: INFO: stderr: "No resources found.\n"
Feb 22 14:39:02.427: INFO: stdout: ""
Feb 22 14:39:02.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4904 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 22 14:39:02.540: INFO: stderr: ""
Feb 22 14:39:02.540: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:39:02.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4904" for this suite.
Feb 22 14:39:24.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:39:24.704: INFO: namespace kubectl-4904 deletion completed in 22.156529931s

• [SLOW TEST:35.768 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
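The log-filtering flags exercised by this spec, isolated from the test harness (pod, container, and namespace names taken from the run above; these require the cluster from this run to actually execute):

```shell
kubectl logs redis-master-mr6k2 redis-master -n kubectl-4904                # full log
kubectl logs redis-master-mr6k2 redis-master -n kubectl-4904 --tail=1       # last line only
kubectl logs redis-master-mr6k2 redis-master -n kubectl-4904 --limit-bytes=1
kubectl logs redis-master-mr6k2 redis-master -n kubectl-4904 --tail=1 --timestamps
kubectl logs redis-master-mr6k2 redis-master -n kubectl-4904 --since=1s     # empty if quiet
kubectl logs redis-master-mr6k2 redis-master -n kubectl-4904 --since=24h
```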
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:39:24.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 22 14:39:24.829: INFO: Waiting up to 5m0s for pod "downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675" in namespace "downward-api-850" to be "success or failure"
Feb 22 14:39:24.846: INFO: Pod "downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675": Phase="Pending", Reason="", readiness=false. Elapsed: 15.806131ms
Feb 22 14:39:26.869: INFO: Pod "downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039528843s
Feb 22 14:39:28.877: INFO: Pod "downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047423707s
Feb 22 14:39:31.862: INFO: Pod "downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675": Phase="Pending", Reason="", readiness=false. Elapsed: 7.032496474s
Feb 22 14:39:33.876: INFO: Pod "downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675": Phase="Pending", Reason="", readiness=false. Elapsed: 9.046498054s
Feb 22 14:39:35.908: INFO: Pod "downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.077906105s
STEP: Saw pod success
Feb 22 14:39:35.908: INFO: Pod "downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675" satisfied condition "success or failure"
Feb 22 14:39:35.924: INFO: Trying to get logs from node iruya-node pod downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675 container dapi-container: 
STEP: delete the pod
Feb 22 14:39:36.267: INFO: Waiting for pod downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675 to disappear
Feb 22 14:39:36.280: INFO: Pod downward-api-6ea9ea9e-a9f8-438e-9845-5dcc8e390675 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:39:36.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-850" for this suite.
Feb 22 14:39:42.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:39:42.435: INFO: namespace downward-api-850 deletion completed in 6.149714032s

• [SLOW TEST:17.730 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
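The spec above relies on downward API fallback behavior: when a container declares no resource limits, `resourceFieldRef` for `limits.cpu`/`limits.memory` reports the node's allocatable capacity instead. A sketch of the kind of pod used (names and image illustrative):

```yaml
# Sketch: no limits set, so these env vars fall back to node allocatable.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```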
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:39:42.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 14:39:42.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb" in namespace "downward-api-8156" to be "success or failure"
Feb 22 14:39:42.580: INFO: Pod "downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.232334ms
Feb 22 14:39:44.606: INFO: Pod "downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043615333s
Feb 22 14:39:46.663: INFO: Pod "downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100711262s
Feb 22 14:39:51.568: INFO: Pod "downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.005541813s
Feb 22 14:39:53.585: INFO: Pod "downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.022432473s
Feb 22 14:39:55.596: INFO: Pod "downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.034020063s
Feb 22 14:39:57.606: INFO: Pod "downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.04328676s
STEP: Saw pod success
Feb 22 14:39:57.606: INFO: Pod "downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb" satisfied condition "success or failure"
Feb 22 14:39:57.611: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb container client-container: 
STEP: delete the pod
Feb 22 14:39:57.787: INFO: Waiting for pod downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb to disappear
Feb 22 14:39:57.802: INFO: Pod downwardapi-volume-fd2df4cb-0233-4f8a-b030-6884ff7191eb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:39:57.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8156" for this suite.
Feb 22 14:40:03.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:40:04.050: INFO: namespace downward-api-8156 deletion completed in 6.211122706s

• [SLOW TEST:21.615 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
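The volume-based variant above exposes the container's own `requests.cpu` through a `downwardAPI` volume; note that, unlike the env-var form, a volume `resourceFieldRef` must name the container explicitly. A sketch (names and values illustrative; with the default divisor of 1, a fractional request is rounded up to whole cores):

```yaml
# Sketch: container's CPU request written to a file via a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container   # required in the volume form
          resource: requests.cpu
```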
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:40:04.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-cac9b5da-a0d8-4396-95af-3c1046ef9b4b in namespace container-probe-8626
Feb 22 14:40:12.242: INFO: Started pod liveness-cac9b5da-a0d8-4396-95af-3c1046ef9b4b in namespace container-probe-8626
STEP: checking the pod's current state and verifying that restartCount is present
Feb 22 14:40:12.246: INFO: Initial restart count of pod liveness-cac9b5da-a0d8-4396-95af-3c1046ef9b4b is 0
Feb 22 14:40:32.384: INFO: Restart count of pod container-probe-8626/liveness-cac9b5da-a0d8-4396-95af-3c1046ef9b4b is now 1 (20.13732822s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:40:32.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8626" for this suite.
Feb 22 14:40:38.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:40:38.659: INFO: namespace container-probe-8626 deletion completed in 6.181732965s

• [SLOW TEST:34.608 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
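The manifest the probe test creates is not shown in the log. A minimal sketch of an equivalent pod — image, port, and timings are illustrative assumptions, not taken from the test — would pair an HTTP GET against `/healthz` with a short probe period, so a handler that starts failing causes exactly the kind of restart the log records (`restartCount` going from 0 to 1 after ~20s):

```yaml
# Hedged sketch of a /healthz HTTP liveness probe; names/values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # assumed test image that fails /healthz after a delay
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
  restartPolicy: Always
```

With `restartPolicy: Always` (the default), the kubelet kills and restarts the container each time the probe fails, which is what the increasing restart count above reflects.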
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:40:38.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-fwjp
STEP: Creating a pod to test atomic-volume-subpath
Feb 22 14:40:38.814: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-fwjp" in namespace "subpath-6583" to be "success or failure"
Feb 22 14:40:38.828: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.508671ms
Feb 22 14:40:40.836: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022798757s
Feb 22 14:40:42.852: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037926488s
Feb 22 14:40:44.871: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056918413s
Feb 22 14:40:47.028: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214381894s
Feb 22 14:40:49.037: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 10.222803498s
Feb 22 14:40:51.044: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 12.230631982s
Feb 22 14:40:53.060: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 14.246241432s
Feb 22 14:40:55.072: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 16.25877919s
Feb 22 14:40:57.082: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 18.268164309s
Feb 22 14:40:59.096: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 20.282372318s
Feb 22 14:41:01.104: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 22.290449332s
Feb 22 14:41:03.116: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 24.301981619s
Feb 22 14:41:05.122: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 26.308794251s
Feb 22 14:41:07.133: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 28.318843251s
Feb 22 14:41:09.142: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Running", Reason="", readiness=true. Elapsed: 30.328297969s
Feb 22 14:41:11.150: INFO: Pod "pod-subpath-test-downwardapi-fwjp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.336505488s
STEP: Saw pod success
Feb 22 14:41:11.151: INFO: Pod "pod-subpath-test-downwardapi-fwjp" satisfied condition "success or failure"
Feb 22 14:41:11.154: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-fwjp container test-container-subpath-downwardapi-fwjp: 
STEP: delete the pod
Feb 22 14:41:11.378: INFO: Waiting for pod pod-subpath-test-downwardapi-fwjp to disappear
Feb 22 14:41:11.386: INFO: Pod pod-subpath-test-downwardapi-fwjp no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-fwjp
Feb 22 14:41:11.386: INFO: Deleting pod "pod-subpath-test-downwardapi-fwjp" in namespace "subpath-6583"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:41:11.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6583" for this suite.
Feb 22 14:41:17.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:41:17.652: INFO: namespace subpath-6583 deletion completed in 6.243359667s

• [SLOW TEST:38.993 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
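The "subpaths with downward pod" test above mounts a single file projected from the downward API via `subPath`. A minimal sketch of the shape involved — volume name, file path, and image are assumptions for illustration — looks like:

```yaml
# Hedged sketch of a downwardAPI volume consumed through subPath; names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-downward
spec:
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /mnt/podname"]
    volumeMounts:
    - name: downward
      mountPath: /mnt/podname
      subPath: podname     # mount only this one file from the volume
  restartPolicy: Never
```

The "atomic writer" aspect being tested is that downward API (like ConfigMap/Secret) volumes update their contents atomically via symlink swaps, and the subPath mount must keep tracking the correct data.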
SS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:41:17.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 22 14:41:28.356: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5169a4ad-00d3-44b3-adbd-6f7a513e4061"
Feb 22 14:41:28.357: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5169a4ad-00d3-44b3-adbd-6f7a513e4061" in namespace "pods-5670" to be "terminated due to deadline exceeded"
Feb 22 14:41:28.365: INFO: Pod "pod-update-activedeadlineseconds-5169a4ad-00d3-44b3-adbd-6f7a513e4061": Phase="Running", Reason="", readiness=true. Elapsed: 8.038146ms
Feb 22 14:41:30.377: INFO: Pod "pod-update-activedeadlineseconds-5169a4ad-00d3-44b3-adbd-6f7a513e4061": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020138165s
Feb 22 14:41:30.378: INFO: Pod "pod-update-activedeadlineseconds-5169a4ad-00d3-44b3-adbd-6f7a513e4061" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:41:30.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5670" for this suite.
Feb 22 14:41:36.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:41:36.605: INFO: namespace pods-5670 deletion completed in 6.219742795s

• [SLOW TEST:18.952 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
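The `activeDeadlineSeconds` test updates a running pod so its deadline expires almost immediately; the kubelet then fails the pod with `Reason="DeadlineExceeded"`, which is the transition the log shows (`Running` to `Failed` within ~2s of the update). A hedged sketch of such a pod — image and values are illustrative:

```yaml
# Hedged sketch: a pod whose activeDeadlineSeconds is later patched down to force
# DeadlineExceeded. Values are illustrative, not taken from the test.
apiVersion: v1
kind: Pod
metadata:
  name: pod-active-deadline
spec:
  activeDeadlineSeconds: 600   # test then updates this to a small value, e.g. 5
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
  restartPolicy: Never
```

`activeDeadlineSeconds` is one of the few pod-spec fields that may be mutated after creation, and it can only be shortened, never extended.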
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:41:36.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 22 14:41:36.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 22 14:41:38.711: INFO: stderr: ""
Feb 22 14:41:38.711: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:41:38.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1424" for this suite.
Feb 22 14:41:44.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:41:44.916: INFO: namespace kubectl-1424 deletion completed in 6.195931934s

• [SLOW TEST:8.310 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:41:44.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-23577a23-2304-4fc8-b586-5c1c32f4992d
STEP: Creating a pod to test consume configMaps
Feb 22 14:41:45.133: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4" in namespace "configmap-5360" to be "success or failure"
Feb 22 14:41:45.142: INFO: Pod "pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166635ms
Feb 22 14:41:47.150: INFO: Pod "pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016744449s
Feb 22 14:41:49.166: INFO: Pod "pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032142212s
Feb 22 14:41:51.174: INFO: Pod "pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040161246s
Feb 22 14:41:53.189: INFO: Pod "pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055057612s
Feb 22 14:41:55.196: INFO: Pod "pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062446029s
STEP: Saw pod success
Feb 22 14:41:55.196: INFO: Pod "pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4" satisfied condition "success or failure"
Feb 22 14:41:55.200: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4 container configmap-volume-test: 
STEP: delete the pod
Feb 22 14:41:55.326: INFO: Waiting for pod pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4 to disappear
Feb 22 14:41:55.333: INFO: Pod pod-configmaps-f4b66cbb-3df1-4737-b274-ad2f759d54a4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:41:55.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5360" for this suite.
Feb 22 14:42:01.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:42:01.559: INFO: namespace configmap-5360 deletion completed in 6.215849897s

• [SLOW TEST:16.643 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
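The ConfigMap `defaultMode` test mounts a ConfigMap volume with an explicit file mode and verifies the projected files carry it. A hedged sketch of the manifest shape — ConfigMap name, mode, and image are illustrative assumptions:

```yaml
# Hedged sketch of a ConfigMap volume with defaultMode; names/values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-defaultmode
spec:
  volumes:
  - name: cfg
    configMap:
      name: my-config      # assumed ConfigMap created beforehand
      defaultMode: 0400    # octal in YAML; the JSON equivalent is decimal 256
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cfg"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  restartPolicy: Never
```

`defaultMode` applies to every projected key unless an individual `items[].mode` overrides it.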
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:42:01.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 22 14:42:01.733: INFO: Waiting up to 5m0s for pod "pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b" in namespace "emptydir-2667" to be "success or failure"
Feb 22 14:42:01.751: INFO: Pod "pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.759666ms
Feb 22 14:42:03.826: INFO: Pod "pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092443494s
Feb 22 14:42:05.852: INFO: Pod "pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118415365s
Feb 22 14:42:07.866: INFO: Pod "pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132417733s
Feb 22 14:42:09.887: INFO: Pod "pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153686259s
STEP: Saw pod success
Feb 22 14:42:09.887: INFO: Pod "pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b" satisfied condition "success or failure"
Feb 22 14:42:09.891: INFO: Trying to get logs from node iruya-node pod pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b container test-container: 
STEP: delete the pod
Feb 22 14:42:09.962: INFO: Waiting for pod pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b to disappear
Feb 22 14:42:09.974: INFO: Pod pod-2159a6a0-9a1b-4fd5-a5b8-3c741982ae7b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:42:09.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2667" for this suite.
Feb 22 14:42:16.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:42:16.174: INFO: namespace emptydir-2667 deletion completed in 6.193257425s

• [SLOW TEST:14.614 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
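The emptyDir `(root,0666,default)` naming encodes the tested combination: run as root, expect file mode 0666, use the default (node-disk) medium. A hedged sketch of an equivalent pod — the real test uses a dedicated mount-test image, so this busybox variant is an assumption:

```yaml
# Hedged sketch of the (root,0666,default) emptyDir case; image/commands are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  volumes:
  - name: scratch
    emptyDir: {}           # default medium (node disk); medium: Memory would use tmpfs
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  restartPolicy: Never
```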
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:42:16.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 14:42:16.256: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:42:17.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6872" for this suite.
Feb 22 14:42:23.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:42:23.684: INFO: namespace custom-resource-definition-6872 deletion completed in 6.194286139s

• [SLOW TEST:7.511 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
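The CRD test exercises nothing more than creating and deleting a CustomResourceDefinition object. On a v1.15 cluster like this one, CRDs still use the `apiextensions.k8s.io/v1beta1` API; a minimal hedged sketch (group, names, and version are illustrative, and the required `metadata.name` must equal `<plural>.<group>`):

```yaml
# Hedged sketch of a minimal v1beta1 CRD; group/kind names are illustrative.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
```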
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:42:23.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 22 14:42:24.394: INFO: created pod pod-service-account-defaultsa
Feb 22 14:42:24.394: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 22 14:42:24.432: INFO: created pod pod-service-account-mountsa
Feb 22 14:42:24.432: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 22 14:42:24.443: INFO: created pod pod-service-account-nomountsa
Feb 22 14:42:24.443: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 22 14:42:24.525: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 22 14:42:24.525: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 22 14:42:24.585: INFO: created pod pod-service-account-mountsa-mountspec
Feb 22 14:42:24.585: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 22 14:42:25.124: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 22 14:42:25.125: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 22 14:42:25.225: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 22 14:42:25.225: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 22 14:42:25.523: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 22 14:42:25.524: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 22 14:42:26.163: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 22 14:42:26.163: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:42:26.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8379" for this suite.
Feb 22 14:42:53.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:42:53.609: INFO: namespace svcaccounts-8379 deletion completed in 27.232754447s

• [SLOW TEST:29.923 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
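The automount test above builds a 3×3 matrix: service account `automountServiceAccountToken` unset/true/false crossed with the pod-spec field unset/true/false, and the logged `service account token volume mount: true/false` lines confirm that the pod-level setting takes precedence over the service account's. A hedged sketch of the opt-out corner of that matrix — object names are illustrative:

```yaml
# Hedged sketch of opting out of token automount; names are illustrative.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: no-token-sa
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-no-token
spec:
  serviceAccountName: no-token-sa
  automountServiceAccountToken: false   # pod-level field wins over the SA's setting
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```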
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:42:53.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-s8tz
STEP: Creating a pod to test atomic-volume-subpath
Feb 22 14:42:53.803: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-s8tz" in namespace "subpath-8648" to be "success or failure"
Feb 22 14:42:53.821: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.991498ms
Feb 22 14:42:55.834: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029578082s
Feb 22 14:42:57.844: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040481694s
Feb 22 14:42:59.857: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053037246s
Feb 22 14:43:01.865: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061022016s
Feb 22 14:43:03.879: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 10.075004918s
Feb 22 14:43:05.890: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 12.08628591s
Feb 22 14:43:07.904: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 14.100335505s
Feb 22 14:43:09.915: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 16.110823445s
Feb 22 14:43:11.924: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 18.119584096s
Feb 22 14:43:14.206: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 20.40172422s
Feb 22 14:43:16.242: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 22.438197701s
Feb 22 14:43:18.251: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 24.447398202s
Feb 22 14:43:20.261: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 26.457311318s
Feb 22 14:43:22.272: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Running", Reason="", readiness=true. Elapsed: 28.468099568s
Feb 22 14:43:24.297: INFO: Pod "pod-subpath-test-secret-s8tz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.492708845s
STEP: Saw pod success
Feb 22 14:43:24.297: INFO: Pod "pod-subpath-test-secret-s8tz" satisfied condition "success or failure"
Feb 22 14:43:24.306: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-s8tz container test-container-subpath-secret-s8tz: 
STEP: delete the pod
Feb 22 14:43:24.458: INFO: Waiting for pod pod-subpath-test-secret-s8tz to disappear
Feb 22 14:43:24.464: INFO: Pod pod-subpath-test-secret-s8tz no longer exists
STEP: Deleting pod pod-subpath-test-secret-s8tz
Feb 22 14:43:24.464: INFO: Deleting pod "pod-subpath-test-secret-s8tz" in namespace "subpath-8648"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:43:24.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8648" for this suite.
Feb 22 14:43:30.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:43:30.874: INFO: namespace subpath-8648 deletion completed in 6.402342995s

• [SLOW TEST:37.264 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:43:30.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 22 14:43:31.066: INFO: Waiting up to 5m0s for pod "pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9" in namespace "emptydir-9873" to be "success or failure"
Feb 22 14:43:31.082: INFO: Pod "pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.359728ms
Feb 22 14:43:33.088: INFO: Pod "pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022006018s
Feb 22 14:43:37.346: INFO: Pod "pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280103104s
Feb 22 14:43:39.357: INFO: Pod "pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290392355s
Feb 22 14:43:41.366: INFO: Pod "pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9": Phase="Running", Reason="", readiness=true. Elapsed: 10.299790004s
Feb 22 14:43:43.375: INFO: Pod "pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.308744594s
STEP: Saw pod success
Feb 22 14:43:43.375: INFO: Pod "pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9" satisfied condition "success or failure"
Feb 22 14:43:43.379: INFO: Trying to get logs from node iruya-node pod pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9 container test-container: 
STEP: delete the pod
Feb 22 14:43:43.472: INFO: Waiting for pod pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9 to disappear
Feb 22 14:43:43.525: INFO: Pod pod-5cb3153d-808a-4c21-8b3d-6238900ca9e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:43:43.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9873" for this suite.
Feb 22 14:43:49.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:43:49.740: INFO: namespace emptydir-9873 deletion completed in 6.199857556s

• [SLOW TEST:18.865 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
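The emptydir pod above is tracked through Pending → Running → Succeeded until it meets the "success or failure" condition. A minimal sketch of that phase-polling pattern, outside the e2e framework (the helper name and retry bound are assumptions; the phase probe is passed in as a command, where a real probe would be something like `kubectl get pod "$POD" -n "$NS" -o jsonpath='{.status.phase}'`):

```shell
#!/bin/sh
# Poll a pod-phase probe until it reports a terminal phase.
# The probe command is passed as "$@"; "Succeeded" or "Failed"
# satisfies the "success or failure" condition seen in the log.
# wait_success_or_failure is a hypothetical helper, not framework code.
wait_success_or_failure() {
  tries=0
  while [ "$tries" -lt 150 ]; do   # the log waits up to 5m0s
    phase=$("$@" 2>/dev/null)
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    tries=$((tries + 1))
    # the real framework sleeps ~2s between checks; omitted here
  done
  echo "timeout"
  return 1
}
```

Usage would be e.g. `wait_success_or_failure kubectl get pod "$POD" -n "$NS" -o jsonpath='{.status.phase}'`.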
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:43:49.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 22 14:43:49.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6505'
Feb 22 14:43:50.267: INFO: stderr: ""
Feb 22 14:43:50.267: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 22 14:43:50.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6505'
Feb 22 14:43:50.535: INFO: stderr: ""
Feb 22 14:43:50.535: INFO: stdout: "update-demo-nautilus-8q4cx update-demo-nautilus-vqbth "
Feb 22 14:43:50.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8q4cx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6505'
Feb 22 14:43:50.646: INFO: stderr: ""
Feb 22 14:43:50.647: INFO: stdout: ""
Feb 22 14:43:50.647: INFO: update-demo-nautilus-8q4cx is created but not running
Feb 22 14:43:55.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6505'
Feb 22 14:43:57.131: INFO: stderr: ""
Feb 22 14:43:57.131: INFO: stdout: "update-demo-nautilus-8q4cx update-demo-nautilus-vqbth "
Feb 22 14:43:57.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8q4cx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6505'
Feb 22 14:43:57.717: INFO: stderr: ""
Feb 22 14:43:57.718: INFO: stdout: ""
Feb 22 14:43:57.718: INFO: update-demo-nautilus-8q4cx is created but not running
Feb 22 14:44:02.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6505'
Feb 22 14:44:02.872: INFO: stderr: ""
Feb 22 14:44:02.873: INFO: stdout: "update-demo-nautilus-8q4cx update-demo-nautilus-vqbth "
Feb 22 14:44:02.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8q4cx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6505'
Feb 22 14:44:02.985: INFO: stderr: ""
Feb 22 14:44:02.986: INFO: stdout: ""
Feb 22 14:44:02.986: INFO: update-demo-nautilus-8q4cx is created but not running
Feb 22 14:44:07.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6505'
Feb 22 14:44:08.180: INFO: stderr: ""
Feb 22 14:44:08.180: INFO: stdout: "update-demo-nautilus-8q4cx update-demo-nautilus-vqbth "
Feb 22 14:44:08.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8q4cx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6505'
Feb 22 14:44:08.473: INFO: stderr: ""
Feb 22 14:44:08.473: INFO: stdout: "true"
Feb 22 14:44:08.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8q4cx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6505'
Feb 22 14:44:08.616: INFO: stderr: ""
Feb 22 14:44:08.616: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 14:44:08.616: INFO: validating pod update-demo-nautilus-8q4cx
Feb 22 14:44:08.665: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 22 14:44:08.665: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 14:44:08.665: INFO: update-demo-nautilus-8q4cx is verified up and running
Feb 22 14:44:08.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vqbth -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6505'
Feb 22 14:44:08.758: INFO: stderr: ""
Feb 22 14:44:08.758: INFO: stdout: "true"
Feb 22 14:44:08.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vqbth -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6505'
Feb 22 14:44:08.844: INFO: stderr: ""
Feb 22 14:44:08.845: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 14:44:08.845: INFO: validating pod update-demo-nautilus-vqbth
Feb 22 14:44:08.918: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 22 14:44:08.919: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 14:44:08.919: INFO: update-demo-nautilus-vqbth is verified up and running
STEP: using delete to clean up resources
Feb 22 14:44:08.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6505'
Feb 22 14:44:09.067: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 22 14:44:09.067: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 22 14:44:09.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6505'
Feb 22 14:44:09.424: INFO: stderr: "No resources found.\n"
Feb 22 14:44:09.424: INFO: stdout: ""
Feb 22 14:44:09.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6505 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 22 14:44:09.534: INFO: stderr: ""
Feb 22 14:44:09.535: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:44:09.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6505" for this suite.
Feb 22 14:44:31.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:44:31.698: INFO: namespace kubectl-6505 deletion completed in 22.147300884s

• [SLOW TEST:41.958 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
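The Update Demo test above repeats the same go-template probe every five seconds until it prints `true` (container running). A minimal sketch of that polling loop (the helper name and 60-try bound are assumptions; the real probe is the `kubectl get pods -o template --template=...containerStatuses...` command shown verbatim in the log):

```shell
#!/bin/sh
# Poll a container-status probe command until it prints "true",
# mirroring the "is created but not running" retry loop in the log.
# The probe is passed as "$@"; wait_for_running is a hypothetical helper.
wait_for_running() {
  tries=0
  while [ "$tries" -lt 60 ]; do
    out=$("$@" 2>/dev/null)
    if [ "$out" = "true" ]; then
      echo "running"
      return 0
    fi
    tries=$((tries + 1))
    # the real test sleeps 5s between polls; omitted for brevity
  done
  echo "timed out"
  return 1
}
```

Usage would be e.g. `wait_for_running kubectl get pods "$POD" -o template --template="$TPL" --namespace="$NS"`, with `$TPL` being the go-template from the log.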
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:44:31.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5711
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-5711
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5711
Feb 22 14:44:31.858: INFO: Found 0 stateful pods, waiting for 1
Feb 22 14:44:41.871: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 22 14:44:41.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 22 14:44:42.808: INFO: stderr: "I0222 14:44:42.088103    2179 log.go:172] (0xc0001166e0) (0xc00033c8c0) Create stream\nI0222 14:44:42.088187    2179 log.go:172] (0xc0001166e0) (0xc00033c8c0) Stream added, broadcasting: 1\nI0222 14:44:42.096634    2179 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0222 14:44:42.096658    2179 log.go:172] (0xc0001166e0) (0xc000788000) Create stream\nI0222 14:44:42.096678    2179 log.go:172] (0xc0001166e0) (0xc000788000) Stream added, broadcasting: 3\nI0222 14:44:42.097936    2179 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0222 14:44:42.097976    2179 log.go:172] (0xc0001166e0) (0xc00028c000) Create stream\nI0222 14:44:42.098006    2179 log.go:172] (0xc0001166e0) (0xc00028c000) Stream added, broadcasting: 5\nI0222 14:44:42.099300    2179 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0222 14:44:42.341776    2179 log.go:172] (0xc0001166e0) Data frame received for 5\nI0222 14:44:42.342309    2179 log.go:172] (0xc00028c000) (5) Data frame handling\nI0222 14:44:42.342393    2179 log.go:172] (0xc00028c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 14:44:42.424917    2179 log.go:172] (0xc0001166e0) Data frame received for 3\nI0222 14:44:42.425478    2179 log.go:172] (0xc000788000) (3) Data frame handling\nI0222 14:44:42.425585    2179 log.go:172] (0xc000788000) (3) Data frame sent\nI0222 14:44:42.792926    2179 log.go:172] (0xc0001166e0) (0xc000788000) Stream removed, broadcasting: 3\nI0222 14:44:42.793727    2179 log.go:172] (0xc0001166e0) Data frame received for 1\nI0222 14:44:42.793752    2179 log.go:172] (0xc0001166e0) (0xc00028c000) Stream removed, broadcasting: 5\nI0222 14:44:42.793801    2179 log.go:172] (0xc00033c8c0) (1) Data frame handling\nI0222 14:44:42.793819    2179 log.go:172] (0xc00033c8c0) (1) Data frame sent\nI0222 14:44:42.793858    2179 log.go:172] (0xc0001166e0) (0xc00033c8c0) Stream removed, broadcasting: 1\nI0222 14:44:42.793898    2179 log.go:172] (0xc0001166e0) Go away received\nI0222 14:44:42.794644    2179 log.go:172] (0xc0001166e0) (0xc00033c8c0) Stream removed, broadcasting: 1\nI0222 14:44:42.794708    2179 log.go:172] (0xc0001166e0) (0xc000788000) Stream removed, broadcasting: 3\nI0222 14:44:42.794741    2179 log.go:172] (0xc0001166e0) (0xc00028c000) Stream removed, broadcasting: 5\n"
Feb 22 14:44:42.808: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 22 14:44:42.808: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 22 14:44:42.821: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 22 14:44:52.875: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 22 14:44:52.876: INFO: Waiting for statefulset status.replicas updated to 0
Feb 22 14:44:52.898: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 22 14:44:52.898: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:44:52.898: INFO: 
Feb 22 14:44:52.898: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 22 14:44:54.788: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991154381s
Feb 22 14:44:55.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.101195229s
Feb 22 14:44:56.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.090807449s
Feb 22 14:44:57.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.07832339s
Feb 22 14:44:59.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.07223197s
Feb 22 14:45:00.708: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.199839909s
Feb 22 14:45:01.779: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.181720152s
Feb 22 14:45:02.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 110.282514ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5711
Feb 22 14:45:03.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:45:04.676: INFO: stderr: "I0222 14:45:04.152335    2201 log.go:172] (0xc0009bc2c0) (0xc0008f25a0) Create stream\nI0222 14:45:04.152588    2201 log.go:172] (0xc0009bc2c0) (0xc0008f25a0) Stream added, broadcasting: 1\nI0222 14:45:04.171475    2201 log.go:172] (0xc0009bc2c0) Reply frame received for 1\nI0222 14:45:04.172144    2201 log.go:172] (0xc0009bc2c0) (0xc0005c8280) Create stream\nI0222 14:45:04.172276    2201 log.go:172] (0xc0009bc2c0) (0xc0005c8280) Stream added, broadcasting: 3\nI0222 14:45:04.181631    2201 log.go:172] (0xc0009bc2c0) Reply frame received for 3\nI0222 14:45:04.181938    2201 log.go:172] (0xc0009bc2c0) (0xc0008f2640) Create stream\nI0222 14:45:04.181994    2201 log.go:172] (0xc0009bc2c0) (0xc0008f2640) Stream added, broadcasting: 5\nI0222 14:45:04.189361    2201 log.go:172] (0xc0009bc2c0) Reply frame received for 5\nI0222 14:45:04.372181    2201 log.go:172] (0xc0009bc2c0) Data frame received for 5\nI0222 14:45:04.372264    2201 log.go:172] (0xc0008f2640) (5) Data frame handling\nI0222 14:45:04.372289    2201 log.go:172] (0xc0008f2640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0222 14:45:04.372440    2201 log.go:172] (0xc0009bc2c0) Data frame received for 3\nI0222 14:45:04.372557    2201 log.go:172] (0xc0005c8280) (3) Data frame handling\nI0222 14:45:04.372613    2201 log.go:172] (0xc0005c8280) (3) Data frame sent\nI0222 14:45:04.659331    2201 log.go:172] (0xc0009bc2c0) (0xc0005c8280) Stream removed, broadcasting: 3\nI0222 14:45:04.659422    2201 log.go:172] (0xc0009bc2c0) Data frame received for 1\nI0222 14:45:04.659443    2201 log.go:172] (0xc0008f25a0) (1) Data frame handling\nI0222 14:45:04.659470    2201 log.go:172] (0xc0008f25a0) (1) Data frame sent\nI0222 14:45:04.659497    2201 log.go:172] (0xc0009bc2c0) (0xc0008f25a0) Stream removed, broadcasting: 1\nI0222 14:45:04.659530    2201 log.go:172] (0xc0009bc2c0) (0xc0008f2640) Stream removed, broadcasting: 5\nI0222 14:45:04.659585    2201 log.go:172] (0xc0009bc2c0) Go away received\nI0222 14:45:04.660427    2201 log.go:172] (0xc0009bc2c0) (0xc0008f25a0) Stream removed, broadcasting: 1\nI0222 14:45:04.660525    2201 log.go:172] (0xc0009bc2c0) (0xc0005c8280) Stream removed, broadcasting: 3\nI0222 14:45:04.660575    2201 log.go:172] (0xc0009bc2c0) (0xc0008f2640) Stream removed, broadcasting: 5\n"
Feb 22 14:45:04.677: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 22 14:45:04.677: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 22 14:45:04.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:45:05.220: INFO: stderr: "I0222 14:45:04.891358    2223 log.go:172] (0xc0008b82c0) (0xc0007b86e0) Create stream\nI0222 14:45:04.891551    2223 log.go:172] (0xc0008b82c0) (0xc0007b86e0) Stream added, broadcasting: 1\nI0222 14:45:04.896547    2223 log.go:172] (0xc0008b82c0) Reply frame received for 1\nI0222 14:45:04.896637    2223 log.go:172] (0xc0008b82c0) (0xc00065c140) Create stream\nI0222 14:45:04.896651    2223 log.go:172] (0xc0008b82c0) (0xc00065c140) Stream added, broadcasting: 3\nI0222 14:45:04.898449    2223 log.go:172] (0xc0008b82c0) Reply frame received for 3\nI0222 14:45:04.898474    2223 log.go:172] (0xc0008b82c0) (0xc0007b8780) Create stream\nI0222 14:45:04.898480    2223 log.go:172] (0xc0008b82c0) (0xc0007b8780) Stream added, broadcasting: 5\nI0222 14:45:04.899393    2223 log.go:172] (0xc0008b82c0) Reply frame received for 5\nI0222 14:45:05.096098    2223 log.go:172] (0xc0008b82c0) Data frame received for 5\nI0222 14:45:05.096134    2223 log.go:172] (0xc0007b8780) (5) Data frame handling\nI0222 14:45:05.096158    2223 log.go:172] (0xc0007b8780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0222 14:45:05.139133    2223 log.go:172] (0xc0008b82c0) Data frame received for 5\nI0222 14:45:05.139175    2223 log.go:172] (0xc0007b8780) (5) Data frame handling\nI0222 14:45:05.139199    2223 log.go:172] (0xc0007b8780) (5) Data frame sent\nI0222 14:45:05.139212    2223 log.go:172] (0xc0008b82c0) Data frame received for 5\nI0222 14:45:05.139227    2223 log.go:172] (0xc0007b8780) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0222 14:45:05.139288    2223 log.go:172] (0xc0007b8780) (5) Data frame sent\nI0222 14:45:05.140362    2223 log.go:172] (0xc0008b82c0) Data frame received for 3\nI0222 14:45:05.140561    2223 log.go:172] (0xc00065c140) (3) Data frame handling\nI0222 14:45:05.140594    2223 log.go:172] (0xc00065c140) (3) Data frame sent\nI0222 14:45:05.211891    2223 log.go:172] (0xc0008b82c0) (0xc00065c140) Stream removed, broadcasting: 3\nI0222 14:45:05.211958    2223 log.go:172] (0xc0008b82c0) Data frame received for 1\nI0222 14:45:05.211975    2223 log.go:172] (0xc0007b86e0) (1) Data frame handling\nI0222 14:45:05.211987    2223 log.go:172] (0xc0007b86e0) (1) Data frame sent\nI0222 14:45:05.212052    2223 log.go:172] (0xc0008b82c0) (0xc0007b86e0) Stream removed, broadcasting: 1\nI0222 14:45:05.212524    2223 log.go:172] (0xc0008b82c0) (0xc0007b8780) Stream removed, broadcasting: 5\nI0222 14:45:05.212544    2223 log.go:172] (0xc0008b82c0) Go away received\nI0222 14:45:05.212653    2223 log.go:172] (0xc0008b82c0) (0xc0007b86e0) Stream removed, broadcasting: 1\nI0222 14:45:05.212699    2223 log.go:172] (0xc0008b82c0) (0xc00065c140) Stream removed, broadcasting: 3\nI0222 14:45:05.212711    2223 log.go:172] (0xc0008b82c0) (0xc0007b8780) Stream removed, broadcasting: 5\n"
Feb 22 14:45:05.220: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 22 14:45:05.220: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 22 14:45:05.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:45:05.697: INFO: stderr: "I0222 14:45:05.431224    2243 log.go:172] (0xc000116e70) (0xc00027e780) Create stream\nI0222 14:45:05.431356    2243 log.go:172] (0xc000116e70) (0xc00027e780) Stream added, broadcasting: 1\nI0222 14:45:05.437783    2243 log.go:172] (0xc000116e70) Reply frame received for 1\nI0222 14:45:05.437842    2243 log.go:172] (0xc000116e70) (0xc0007d6000) Create stream\nI0222 14:45:05.437875    2243 log.go:172] (0xc000116e70) (0xc0007d6000) Stream added, broadcasting: 3\nI0222 14:45:05.440250    2243 log.go:172] (0xc000116e70) Reply frame received for 3\nI0222 14:45:05.440377    2243 log.go:172] (0xc000116e70) (0xc00027e820) Create stream\nI0222 14:45:05.440404    2243 log.go:172] (0xc000116e70) (0xc00027e820) Stream added, broadcasting: 5\nI0222 14:45:05.442504    2243 log.go:172] (0xc000116e70) Reply frame received for 5\nI0222 14:45:05.550801    2243 log.go:172] (0xc000116e70) Data frame received for 5\nI0222 14:45:05.551017    2243 log.go:172] (0xc00027e820) (5) Data frame handling\nI0222 14:45:05.551057    2243 log.go:172] (0xc00027e820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0222 14:45:05.551263    2243 log.go:172] (0xc000116e70) Data frame received for 5\nI0222 14:45:05.551303    2243 log.go:172] (0xc00027e820) (5) Data frame handling\nI0222 14:45:05.551331    2243 log.go:172] (0xc00027e820) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0222 14:45:05.551398    2243 log.go:172] (0xc000116e70) Data frame received for 3\nI0222 14:45:05.551422    2243 log.go:172] (0xc0007d6000) (3) Data frame handling\nI0222 14:45:05.551498    2243 log.go:172] (0xc0007d6000) (3) Data frame sent\nI0222 14:45:05.686707    2243 log.go:172] (0xc000116e70) Data frame received for 1\nI0222 14:45:05.686959    2243 log.go:172] (0xc000116e70) (0xc0007d6000) Stream removed, broadcasting: 3\nI0222 14:45:05.687043    2243 log.go:172] (0xc00027e780) (1) Data frame handling\nI0222 14:45:05.687107    2243 log.go:172] (0xc000116e70) (0xc00027e820) Stream removed, broadcasting: 5\nI0222 14:45:05.687162    2243 log.go:172] (0xc00027e780) (1) Data frame sent\nI0222 14:45:05.687195    2243 log.go:172] (0xc000116e70) (0xc00027e780) Stream removed, broadcasting: 1\nI0222 14:45:05.688024    2243 log.go:172] (0xc000116e70) (0xc00027e780) Stream removed, broadcasting: 1\nI0222 14:45:05.688318    2243 log.go:172] (0xc000116e70) (0xc0007d6000) Stream removed, broadcasting: 3\nI0222 14:45:05.688341    2243 log.go:172] (0xc000116e70) (0xc00027e820) Stream removed, broadcasting: 5\nI0222 14:45:05.688421    2243 log.go:172] (0xc000116e70) Go away received\n"
Feb 22 14:45:05.698: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 22 14:45:05.698: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 22 14:45:05.712: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 14:45:05.712: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 22 14:45:05.712: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 22 14:45:05.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 22 14:45:06.270: INFO: stderr: "I0222 14:45:05.958118    2265 log.go:172] (0xc00092c420) (0xc000602b40) Create stream\nI0222 14:45:05.958265    2265 log.go:172] (0xc00092c420) (0xc000602b40) Stream added, broadcasting: 1\nI0222 14:45:05.971498    2265 log.go:172] (0xc00092c420) Reply frame received for 1\nI0222 14:45:05.971576    2265 log.go:172] (0xc00092c420) (0xc0004800a0) Create stream\nI0222 14:45:05.971590    2265 log.go:172] (0xc00092c420) (0xc0004800a0) Stream added, broadcasting: 3\nI0222 14:45:05.975366    2265 log.go:172] (0xc00092c420) Reply frame received for 3\nI0222 14:45:05.975422    2265 log.go:172] (0xc00092c420) (0xc000602be0) Create stream\nI0222 14:45:05.975441    2265 log.go:172] (0xc00092c420) (0xc000602be0) Stream added, broadcasting: 5\nI0222 14:45:05.978517    2265 log.go:172] (0xc00092c420) Reply frame received for 5\nI0222 14:45:06.121400    2265 log.go:172] (0xc00092c420) Data frame received for 5\nI0222 14:45:06.121508    2265 log.go:172] (0xc000602be0) (5) Data frame handling\nI0222 14:45:06.121532    2265 log.go:172] (0xc000602be0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 14:45:06.121560    2265 log.go:172] (0xc00092c420) Data frame received for 3\nI0222 14:45:06.121573    2265 log.go:172] (0xc0004800a0) (3) Data frame handling\nI0222 14:45:06.121600    2265 log.go:172] (0xc0004800a0) (3) Data frame sent\nI0222 14:45:06.258194    2265 log.go:172] (0xc00092c420) (0xc0004800a0) Stream removed, broadcasting: 3\nI0222 14:45:06.258305    2265 log.go:172] (0xc00092c420) Data frame received for 1\nI0222 14:45:06.258347    2265 log.go:172] (0xc00092c420) (0xc000602be0) Stream removed, broadcasting: 5\nI0222 14:45:06.258614    2265 log.go:172] (0xc000602b40) (1) Data frame handling\nI0222 14:45:06.258653    2265 log.go:172] (0xc000602b40) (1) Data frame sent\nI0222 14:45:06.258668    2265 log.go:172] (0xc00092c420) (0xc000602b40) Stream removed, broadcasting: 1\nI0222 14:45:06.258688    2265 log.go:172] (0xc00092c420) Go away received\nI0222 14:45:06.259126    2265 log.go:172] (0xc00092c420) (0xc000602b40) Stream removed, broadcasting: 1\nI0222 14:45:06.259143    2265 log.go:172] (0xc00092c420) (0xc0004800a0) Stream removed, broadcasting: 3\nI0222 14:45:06.259155    2265 log.go:172] (0xc00092c420) (0xc000602be0) Stream removed, broadcasting: 5\n"
Feb 22 14:45:06.270: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 22 14:45:06.270: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 22 14:45:06.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 22 14:45:06.690: INFO: stderr: "I0222 14:45:06.463910    2284 log.go:172] (0xc000a28630) (0xc0007fec80) Create stream\nI0222 14:45:06.464168    2284 log.go:172] (0xc000a28630) (0xc0007fec80) Stream added, broadcasting: 1\nI0222 14:45:06.480444    2284 log.go:172] (0xc000a28630) Reply frame received for 1\nI0222 14:45:06.480535    2284 log.go:172] (0xc000a28630) (0xc0007fe3c0) Create stream\nI0222 14:45:06.480549    2284 log.go:172] (0xc000a28630) (0xc0007fe3c0) Stream added, broadcasting: 3\nI0222 14:45:06.481779    2284 log.go:172] (0xc000a28630) Reply frame received for 3\nI0222 14:45:06.481847    2284 log.go:172] (0xc000a28630) (0xc00016c000) Create stream\nI0222 14:45:06.481866    2284 log.go:172] (0xc000a28630) (0xc00016c000) Stream added, broadcasting: 5\nI0222 14:45:06.483649    2284 log.go:172] (0xc000a28630) Reply frame received for 5\nI0222 14:45:06.568941    2284 log.go:172] (0xc000a28630) Data frame received for 5\nI0222 14:45:06.569019    2284 log.go:172] (0xc00016c000) (5) Data frame handling\nI0222 14:45:06.569049    2284 log.go:172] (0xc00016c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 14:45:06.582942    2284 log.go:172] (0xc000a28630) Data frame received for 3\nI0222 14:45:06.583008    2284 log.go:172] (0xc0007fe3c0) (3) Data frame handling\nI0222 14:45:06.583038    2284 log.go:172] (0xc0007fe3c0) (3) Data frame sent\nI0222 14:45:06.679835    2284 log.go:172] (0xc000a28630) Data frame received for 1\nI0222 14:45:06.679897    2284 log.go:172] (0xc000a28630) (0xc0007fe3c0) Stream removed, broadcasting: 3\nI0222 14:45:06.679987    2284 log.go:172] (0xc0007fec80) (1) Data frame handling\nI0222 14:45:06.680026    2284 log.go:172] (0xc000a28630) (0xc00016c000) Stream removed, broadcasting: 5\nI0222 14:45:06.680082    2284 log.go:172] (0xc0007fec80) (1) Data frame sent\nI0222 14:45:06.680091    2284 log.go:172] (0xc000a28630) (0xc0007fec80) Stream removed, broadcasting: 1\nI0222 14:45:06.680105    2284 log.go:172] (0xc000a28630) Go away received\nI0222 14:45:06.680721    2284 log.go:172] (0xc000a28630) (0xc0007fec80) Stream removed, broadcasting: 1\nI0222 14:45:06.680752    2284 log.go:172] (0xc000a28630) (0xc0007fe3c0) Stream removed, broadcasting: 3\nI0222 14:45:06.680787    2284 log.go:172] (0xc000a28630) (0xc00016c000) Stream removed, broadcasting: 5\n"
Feb 22 14:45:06.690: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 22 14:45:06.690: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 22 14:45:06.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 22 14:45:07.110: INFO: stderr: "I0222 14:45:06.894718    2304 log.go:172] (0xc000a22370) (0xc000a385a0) Create stream\nI0222 14:45:06.894808    2304 log.go:172] (0xc000a22370) (0xc000a385a0) Stream added, broadcasting: 1\nI0222 14:45:06.898732    2304 log.go:172] (0xc000a22370) Reply frame received for 1\nI0222 14:45:06.898760    2304 log.go:172] (0xc000a22370) (0xc000966000) Create stream\nI0222 14:45:06.898768    2304 log.go:172] (0xc000a22370) (0xc000966000) Stream added, broadcasting: 3\nI0222 14:45:06.899825    2304 log.go:172] (0xc000a22370) Reply frame received for 3\nI0222 14:45:06.899852    2304 log.go:172] (0xc000a22370) (0xc00069c280) Create stream\nI0222 14:45:06.899876    2304 log.go:172] (0xc000a22370) (0xc00069c280) Stream added, broadcasting: 5\nI0222 14:45:06.900801    2304 log.go:172] (0xc000a22370) Reply frame received for 5\nI0222 14:45:06.992635    2304 log.go:172] (0xc000a22370) Data frame received for 5\nI0222 14:45:06.992673    2304 log.go:172] (0xc00069c280) (5) Data frame handling\nI0222 14:45:06.992693    2304 log.go:172] (0xc00069c280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0222 14:45:07.020197    2304 log.go:172] (0xc000a22370) Data frame received for 3\nI0222 14:45:07.020217    2304 log.go:172] (0xc000966000) (3) Data frame handling\nI0222 14:45:07.020236    2304 log.go:172] (0xc000966000) (3) Data frame sent\nI0222 14:45:07.100197    2304 log.go:172] (0xc000a22370) Data frame received for 1\nI0222 14:45:07.100299    2304 log.go:172] (0xc000a22370) (0xc000966000) Stream removed, broadcasting: 3\nI0222 14:45:07.100367    2304 log.go:172] (0xc000a385a0) (1) Data frame handling\nI0222 14:45:07.100392    2304 log.go:172] (0xc000a385a0) (1) Data frame sent\nI0222 14:45:07.100409    2304 log.go:172] (0xc000a22370) (0xc000a385a0) Stream removed, broadcasting: 1\nI0222 14:45:07.101357    2304 log.go:172] (0xc000a22370) (0xc00069c280) Stream removed, broadcasting: 5\nI0222 14:45:07.101447    2304 log.go:172] (0xc000a22370) (0xc000a385a0) Stream removed, broadcasting: 1\nI0222 14:45:07.101504    2304 log.go:172] (0xc000a22370) (0xc000966000) Stream removed, broadcasting: 3\nI0222 14:45:07.101518    2304 log.go:172] (0xc000a22370) (0xc00069c280) Stream removed, broadcasting: 5\n"
Feb 22 14:45:07.110: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 22 14:45:07.110: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
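The `mv` above is how the test deliberately breaks the pod's readiness: nginx's HTTP readiness check serves `index.html`, so moving it out of the web root makes the probe fail and the pod report `Ready=false`. A minimal local sketch of the same trick, using illustrative paths rather than the pod's real filesystem:

```shell
# Stand-in for the web root probed by the readiness check (illustrative path).
mkdir -p /tmp/ss-html
echo ok > /tmp/ss-html/index.html

# Same shape as the test's command: '|| true' keeps the exit code 0
# even when the file has already been moved on a previous attempt.
mv -v /tmp/ss-html/index.html /tmp/ || true

# Once the probed file is gone, the (simulated) readiness check fails.
test ! -e /tmp/ss-html/index.html && echo "probe file removed"
```

The inverse command later in the log (`mv /tmp/index.html /usr/share/nginx/html/`) restores the file to make the probe pass again.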

Feb 22 14:45:07.110: INFO: Waiting for statefulset status.replicas updated to 0
Feb 22 14:45:07.116: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 22 14:45:17.136: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 22 14:45:17.136: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 22 14:45:17.136: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 22 14:45:17.158: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 22 14:45:17.158: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:45:17.158: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:17.158: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:17.158: INFO: 
Feb 22 14:45:17.158: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 22 14:45:19.098: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 22 14:45:19.098: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:45:19.098: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:19.098: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:19.098: INFO: 
Feb 22 14:45:19.098: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 22 14:45:20.110: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 22 14:45:20.110: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:45:20.110: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:20.110: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:20.110: INFO: 
Feb 22 14:45:20.110: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 22 14:45:21.118: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 22 14:45:21.119: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:45:21.119: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:21.119: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:21.119: INFO: 
Feb 22 14:45:21.119: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 22 14:45:22.184: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 22 14:45:22.184: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:45:22.184: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:22.184: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:22.184: INFO: 
Feb 22 14:45:22.184: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 22 14:45:23.196: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 22 14:45:23.196: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:45:23.197: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:23.197: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:23.197: INFO: 
Feb 22 14:45:23.197: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 22 14:45:24.207: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 22 14:45:24.207: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:45:24.207: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:24.207: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:24.207: INFO: 
Feb 22 14:45:24.207: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 22 14:45:25.221: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 22 14:45:25.221: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:45:25.221: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:25.221: INFO: 
Feb 22 14:45:25.221: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 22 14:45:26.247: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 22 14:45:26.247: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:31 +0000 UTC  }]
Feb 22 14:45:26.247: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:45:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-22 14:44:52 +0000 UTC  }]
Feb 22 14:45:26.247: INFO: 
Feb 22 14:45:26.247: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5711

Feb 22 14:45:27.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:45:27.508: INFO: rc: 1
Feb 22 14:45:27.509: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00327c720 exit status 1   true [0xc0000117d0 0xc000011c88 0xc000011e10] [0xc0000117d0 0xc000011c88 0xc000011e10] [0xc000011bc0 0xc000011d20] [0xba6c50 0xba6c50] 0xc001d1be00 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
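The failures above are expected: the scale-down deletes `ss-0`, so the restore command first hits `container not found ("nginx")` (container terminating) and then `pods "ss-0" not found` (pod object gone), and the framework simply retries every 10s until its overall timeout. A compressed sketch of that retry wrapper, with an illustrative probe in place of the real `kubectl exec`:

```shell
# Bounded retry in the spirit of the framework's RunHostCmd loop
# (attempt count, command, and probed file are all illustrative).
retry() {
  attempts=$1; shift
  i=1
  until "$@"; do
    [ "$i" -ge "$attempts" ] && return 1   # retries exhausted
    i=$((i + 1))
    sleep 0.1                              # the real loop waits 10s between attempts
  done
}

rm -f /tmp/ss-0-alive
retry 3 test -e /tmp/ss-0-alive || echo "pod ss-0 never came back"
```

Because every attempt targets a pod that has already been deleted, the loop here, like the one in the log, keeps failing until its budget runs out.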
Feb 22 14:45:37.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:45:37.674: INFO: rc: 1
Feb 22 14:45:37.675: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002363230 exit status 1   true [0xc001cc0020 0xc001cc0038 0xc001cc0050] [0xc001cc0020 0xc001cc0038 0xc001cc0050] [0xc001cc0030 0xc001cc0048] [0xba6c50 0xba6c50] 0xc001e6c120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:45:47.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:45:47.895: INFO: rc: 1
Feb 22 14:45:47.896: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023632f0 exit status 1   true [0xc001cc0058 0xc001cc0070 0xc001cc0088] [0xc001cc0058 0xc001cc0070 0xc001cc0088] [0xc001cc0068 0xc001cc0080] [0xba6c50 0xba6c50] 0xc001e6c900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:45:57.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:45:58.044: INFO: rc: 1
Feb 22 14:45:58.044: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a5b740 exit status 1   true [0xc0009d2a08 0xc0009d2ac0 0xc0009d2c00] [0xc0009d2a08 0xc0009d2ac0 0xc0009d2c00] [0xc0009d2a80 0xc0009d2ba8] [0xba6c50 0xba6c50] 0xc001df8900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:46:08.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:46:08.233: INFO: rc: 1
Feb 22 14:46:08.233: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a5b830 exit status 1   true [0xc0009d2c50 0xc0009d2db8 0xc0009d2eb8] [0xc0009d2c50 0xc0009d2db8 0xc0009d2eb8] [0xc0009d2d60 0xc0009d2e58] [0xba6c50 0xba6c50] 0xc001468a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:46:18.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:46:18.475: INFO: rc: 1
Feb 22 14:46:18.476: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a5b8f0 exit status 1   true [0xc0009d2f18 0xc0009d2fc8 0xc0009d3058] [0xc0009d2f18 0xc0009d2fc8 0xc0009d3058] [0xc0009d2fa0 0xc0009d2ff0] [0xba6c50 0xba6c50] 0xc001d9ade0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:46:28.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:46:28.629: INFO: rc: 1
Feb 22 14:46:28.629: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a5b9b0 exit status 1   true [0xc0009d30d8 0xc0009d3270 0xc0009d32d0] [0xc0009d30d8 0xc0009d3270 0xc0009d32d0] [0xc0009d31b0 0xc0009d32b0] [0xba6c50 0xba6c50] 0xc001d9bec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:46:38.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:46:38.828: INFO: rc: 1
Feb 22 14:46:38.828: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f02ea0 exit status 1   true [0xc001a06168 0xc001a061c8 0xc001a06258] [0xc001a06168 0xc001a061c8 0xc001a06258] [0xc001a061b0 0xc001a061d8] [0xba6c50 0xba6c50] 0xc0022a9980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:46:48.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:46:48.974: INFO: rc: 1
Feb 22 14:46:48.974: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0023633b0 exit status 1   true [0xc001cc0090 0xc001cc00a8 0xc001cc00c0] [0xc001cc0090 0xc001cc00a8 0xc001cc00c0] [0xc001cc00a0 0xc001cc00b8] [0xba6c50 0xba6c50] 0xc001e6d620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:46:58.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:46:59.150: INFO: rc: 1
Feb 22 14:46:59.151: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f02f90 exit status 1   true [0xc001a062c0 0xc001a06370 0xc001a06438] [0xc001a062c0 0xc001a06370 0xc001a06438] [0xc001a06360 0xc001a063c8] [0xba6c50 0xba6c50] 0xc0021cc720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:47:09.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:47:09.334: INFO: rc: 1
Feb 22 14:47:09.334: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f03050 exit status 1   true [0xc001a06498 0xc001a065a0 0xc001a06648] [0xc001a06498 0xc001a065a0 0xc001a06648] [0xc001a06578 0xc001a065c8] [0xba6c50 0xba6c50] 0xc0021cce40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:47:19.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:47:19.493: INFO: rc: 1
Feb 22 14:47:19.493: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f02000 exit status 1   true [0xc002516008 0xc002516020 0xc002516038] [0xc002516008 0xc002516020 0xc002516038] [0xc002516018 0xc002516030] [0xba6c50 0xba6c50] 0xc0021cc1e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:47:29.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:47:29.722: INFO: rc: 1
Feb 22 14:47:29.723: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f020f0 exit status 1   true [0xc0003bc338 0xc000010f30 0xc0000111a8] [0xc0003bc338 0xc000010f30 0xc0000111a8] [0xc000010f08 0xc0000110c0] [0xba6c50 0xba6c50] 0xc0021ccb40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:47:39.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:47:39.949: INFO: rc: 1
Feb 22 14:47:39.949: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f021b0 exit status 1   true [0xc0000111e0 0xc0000113b8 0xc000011538] [0xc0000111e0 0xc0000113b8 0xc000011538] [0xc000011340 0xc0000114b0] [0xba6c50 0xba6c50] 0xc0021cd0e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:47:49.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:47:50.112: INFO: rc: 1
Feb 22 14:47:50.112: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f022a0 exit status 1   true [0xc000011548 0xc000011578 0xc0000115b0] [0xc000011548 0xc000011578 0xc0000115b0] [0xc000011560 0xc000011598] [0xba6c50 0xba6c50] 0xc0021cdaa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:48:00.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:48:00.282: INFO: rc: 1
Feb 22 14:48:00.282: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f02390 exit status 1   true [0xc0000115e0 0xc000011688 0xc0000117d0] [0xc0000115e0 0xc000011688 0xc0000117d0] [0xc000011650 0xc0000117a8] [0xba6c50 0xba6c50] 0xc0022a93e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:48:10.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:48:10.522: INFO: rc: 1
Feb 22 14:48:10.522: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a1b9b0 exit status 1   true [0xc001cc0000 0xc001cc0018 0xc001cc0030] [0xc001cc0000 0xc001cc0018 0xc001cc0030] [0xc001cc0010 0xc001cc0028] [0xba6c50 0xba6c50] 0xc001d9acc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:48:20.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:48:20.653: INFO: rc: 1
Feb 22 14:48:20.654: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a1baa0 exit status 1   true [0xc001cc0038 0xc001cc0050 0xc001cc0068] [0xc001cc0038 0xc001cc0050 0xc001cc0068] [0xc001cc0048 0xc001cc0060] [0xba6c50 0xba6c50] 0xc001d9bd40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:48:30.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:48:30.769: INFO: rc: 1
Feb 22 14:48:30.769: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f02600 exit status 1   true [0xc000011b48 0xc000011cc8 0xc000011ea0] [0xc000011b48 0xc000011cc8 0xc000011ea0] [0xc000011c88 0xc000011e10] [0xba6c50 0xba6c50] 0xc001468a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:48:40.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:48:40.974: INFO: rc: 1
Feb 22 14:48:40.974: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a1bb90 exit status 1   true [0xc001cc0070 0xc001cc0088 0xc001cc00a0] [0xc001cc0070 0xc001cc0088 0xc001cc00a0] [0xc001cc0080 0xc001cc0098] [0xba6c50 0xba6c50] 0xc001e6c120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:48:50.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:48:51.103: INFO: rc: 1
Feb 22 14:48:51.104: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a1bc50 exit status 1   true [0xc001cc00a8 0xc001cc00c0 0xc001cc00d8] [0xc001cc00a8 0xc001cc00c0 0xc001cc00d8] [0xc001cc00b8 0xc001cc00d0] [0xba6c50 0xba6c50] 0xc001e6c900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:49:01.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:49:01.293: INFO: rc: 1
Feb 22 14:49:01.294: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a1bd10 exit status 1   true [0xc001cc00e0 0xc001cc00f8 0xc001cc0110] [0xc001cc00e0 0xc001cc00f8 0xc001cc0110] [0xc001cc00f0 0xc001cc0108] [0xba6c50 0xba6c50] 0xc001e6d620 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:49:11.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:49:11.468: INFO: rc: 1
Feb 22 14:49:11.468: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00256e0f0 exit status 1   true [0xc001a06000 0xc001a06028 0xc001a060a8] [0xc001a06000 0xc001a06028 0xc001a060a8] [0xc001a06010 0xc001a06098] [0xba6c50 0xba6c50] 0xc001c84000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:49:21.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:49:21.643: INFO: rc: 1
Feb 22 14:49:21.643: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00256e090 exit status 1   true [0xc001a06000 0xc001a06028 0xc001a060a8] [0xc001a06000 0xc001a06028 0xc001a060a8] [0xc001a06010 0xc001a06098] [0xba6c50 0xba6c50] 0xc0021cc960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:49:31.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:49:31.810: INFO: rc: 1
Feb 22 14:49:31.810: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f020c0 exit status 1   true [0xc001cc0000 0xc001cc0018 0xc001cc0030] [0xc001cc0000 0xc001cc0018 0xc001cc0030] [0xc001cc0010 0xc001cc0028] [0xba6c50 0xba6c50] 0xc001e6c060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:49:41.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:49:42.016: INFO: rc: 1
Feb 22 14:49:42.016: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00327c0f0 exit status 1   true [0xc000010010 0xc000011038 0xc0000111e0] [0xc000010010 0xc000011038 0xc0000111e0] [0xc000010f30 0xc0000111a8] [0xba6c50 0xba6c50] 0xc001468a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:49:52.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:49:52.169: INFO: rc: 1
Feb 22 14:49:52.170: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00327c1e0 exit status 1   true [0xc000011298 0xc000011448 0xc000011548] [0xc000011298 0xc000011448 0xc000011548] [0xc0000113b8 0xc000011538] [0xba6c50 0xba6c50] 0xc001d9ade0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:50:02.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:50:02.310: INFO: rc: 1
Feb 22 14:50:02.310: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00256e1b0 exit status 1   true [0xc001a060b8 0xc001a060e8 0xc001a06140] [0xc001a060b8 0xc001a060e8 0xc001a06140] [0xc001a060d8 0xc001a06120] [0xba6c50 0xba6c50] 0xc0021ccf60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:50:12.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:50:12.533: INFO: rc: 1
Feb 22 14:50:12.534: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00256e270 exit status 1   true [0xc001a06158 0xc001a061b0 0xc001a061d8] [0xc001a06158 0xc001a061b0 0xc001a061d8] [0xc001a06180 0xc001a061d0] [0xba6c50 0xba6c50] 0xc0021cd920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:50:22.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:50:22.681: INFO: rc: 1
Feb 22 14:50:22.682: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a1ba10 exit status 1   true [0xc0009d2028 0xc0009d2098 0xc0009d21a8] [0xc0009d2028 0xc0009d2098 0xc0009d21a8] [0xc0009d2088 0xc0009d2170] [0xba6c50 0xba6c50] 0xc0022a9980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb 22 14:50:32.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5711 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 22 14:50:32.847: INFO: rc: 1
Feb 22 14:50:32.847: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb 22 14:50:32.847: INFO: Scaling statefulset ss to 0
Feb 22 14:50:32.877: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 22 14:50:32.880: INFO: Deleting all statefulset in ns statefulset-5711
Feb 22 14:50:32.884: INFO: Scaling statefulset ss to 0
Feb 22 14:50:32.896: INFO: Waiting for statefulset status.replicas updated to 0
Feb 22 14:50:32.898: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:50:32.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5711" for this suite.
Feb 22 14:50:40.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:50:41.110: INFO: namespace statefulset-5711 deletion completed in 8.140397717s

• [SLOW TEST:369.412 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:50:41.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-090971c3-8238-4ebe-8dca-b44f22ce81d0 in namespace container-probe-8641
Feb 22 14:50:51.261: INFO: Started pod test-webserver-090971c3-8238-4ebe-8dca-b44f22ce81d0 in namespace container-probe-8641
STEP: checking the pod's current state and verifying that restartCount is present
Feb 22 14:50:51.265: INFO: Initial restart count of pod test-webserver-090971c3-8238-4ebe-8dca-b44f22ce81d0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:54:51.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8641" for this suite.
Feb 22 14:54:57.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:54:57.885: INFO: namespace container-probe-8641 deletion completed in 6.343836512s

• [SLOW TEST:256.775 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:54:57.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 22 14:54:58.021: INFO: Waiting up to 5m0s for pod "downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9" in namespace "downward-api-2368" to be "success or failure"
Feb 22 14:54:58.042: INFO: Pod "downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.757551ms
Feb 22 14:55:00.048: INFO: Pod "downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025942661s
Feb 22 14:55:02.293: INFO: Pod "downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271163993s
Feb 22 14:55:04.316: INFO: Pod "downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.29458271s
Feb 22 14:55:06.326: INFO: Pod "downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.303777784s
Feb 22 14:55:08.343: INFO: Pod "downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.321273781s
STEP: Saw pod success
Feb 22 14:55:08.344: INFO: Pod "downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9" satisfied condition "success or failure"
Feb 22 14:55:08.359: INFO: Trying to get logs from node iruya-node pod downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9 container dapi-container: 
STEP: delete the pod
Feb 22 14:55:08.482: INFO: Waiting for pod downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9 to disappear
Feb 22 14:55:08.496: INFO: Pod downward-api-c150e250-3bbd-4b90-9c50-73ccc361d0a9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:55:08.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2368" for this suite.
Feb 22 14:55:14.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:55:14.783: INFO: namespace downward-api-2368 deletion completed in 6.245691976s

• [SLOW TEST:16.897 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
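The `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from a poll loop: check the pod phase every couple of seconds, log the phase and elapsed time, and stop when a terminal phase is reached or the deadline expires. A minimal sketch of that pattern, assuming an injected `get_phase` callable rather than a real API client (all names here are illustrative, not the framework's):

```python
import time

def wait_for_phase(get_phase, target_phases, timeout=300.0, poll=2.0):
    """Poll `get_phase()` until it returns a phase in `target_phases`
    or `timeout` seconds elapse.

    Each poll prints a line in the spirit of the log's
    'Phase="Pending" ... Elapsed: Ns' entries.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in target_phases:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'still "{phase}" after {timeout}s')
        time.sleep(poll)
```

Treating both `Succeeded` and `Failed` as terminal (as the test's "success or failure" condition does) lets the caller inspect the final phase instead of blocking for the full timeout when a pod fails early.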
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:55:14.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 22 14:55:14.930: INFO: Waiting up to 5m0s for pod "client-containers-f455422c-aec5-4a5f-b408-e2216864c419" in namespace "containers-2509" to be "success or failure"
Feb 22 14:55:14.969: INFO: Pod "client-containers-f455422c-aec5-4a5f-b408-e2216864c419": Phase="Pending", Reason="", readiness=false. Elapsed: 38.606312ms
Feb 22 14:55:16.984: INFO: Pod "client-containers-f455422c-aec5-4a5f-b408-e2216864c419": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053683251s
Feb 22 14:55:18.994: INFO: Pod "client-containers-f455422c-aec5-4a5f-b408-e2216864c419": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063069982s
Feb 22 14:55:21.001: INFO: Pod "client-containers-f455422c-aec5-4a5f-b408-e2216864c419": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070059976s
Feb 22 14:55:23.079: INFO: Pod "client-containers-f455422c-aec5-4a5f-b408-e2216864c419": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.148468672s
STEP: Saw pod success
Feb 22 14:55:23.079: INFO: Pod "client-containers-f455422c-aec5-4a5f-b408-e2216864c419" satisfied condition "success or failure"
Feb 22 14:55:23.087: INFO: Trying to get logs from node iruya-node pod client-containers-f455422c-aec5-4a5f-b408-e2216864c419 container test-container: 
STEP: delete the pod
Feb 22 14:55:23.166: INFO: Waiting for pod client-containers-f455422c-aec5-4a5f-b408-e2216864c419 to disappear
Feb 22 14:55:23.172: INFO: Pod client-containers-f455422c-aec5-4a5f-b408-e2216864c419 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:55:23.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2509" for this suite.
Feb 22 14:55:29.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:55:29.337: INFO: namespace containers-2509 deletion completed in 6.159766863s

• [SLOW TEST:14.550 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:55:29.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 22 14:55:38.600: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:55:38.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6798" for this suite.
Feb 22 14:55:44.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:55:44.796: INFO: namespace container-runtime-6798 deletion completed in 6.137351216s

• [SLOW TEST:15.457 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:55:44.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:55:44.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9522" for this suite.
Feb 22 14:56:07.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:56:07.206: INFO: namespace pods-9522 deletion completed in 22.164887836s

• [SLOW TEST:22.409 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:56:07.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c2e3fd23-6d8f-440f-99b3-d681d7a440cb
STEP: Creating a pod to test consume configMaps
Feb 22 14:56:07.313: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561" in namespace "configmap-1682" to be "success or failure"
Feb 22 14:56:07.321: INFO: Pod "pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561": Phase="Pending", Reason="", readiness=false. Elapsed: 7.29799ms
Feb 22 14:56:09.329: INFO: Pod "pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015641846s
Feb 22 14:56:11.340: INFO: Pod "pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026306371s
Feb 22 14:56:13.351: INFO: Pod "pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037277723s
Feb 22 14:56:15.933: INFO: Pod "pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561": Phase="Pending", Reason="", readiness=false. Elapsed: 8.619508154s
Feb 22 14:56:17.945: INFO: Pod "pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.631955701s
STEP: Saw pod success
Feb 22 14:56:17.946: INFO: Pod "pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561" satisfied condition "success or failure"
Feb 22 14:56:17.950: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561 container configmap-volume-test: 
STEP: delete the pod
Feb 22 14:56:18.050: INFO: Waiting for pod pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561 to disappear
Feb 22 14:56:18.062: INFO: Pod pod-configmaps-c6de3a96-af2e-47f3-8615-5f9e911bc561 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:56:18.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1682" for this suite.
Feb 22 14:56:24.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:56:24.197: INFO: namespace configmap-1682 deletion completed in 6.124844247s

• [SLOW TEST:16.991 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:56:24.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 22 14:56:32.976: INFO: Successfully updated pod "annotationupdate6ed2ec68-f206-463f-9424-142ea392d30f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:56:35.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-555" for this suite.
Feb 22 14:56:57.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:56:57.253: INFO: namespace projected-555 deletion completed in 22.176786186s

• [SLOW TEST:33.056 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:56:57.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 22 14:56:57.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9483'
Feb 22 14:56:59.448: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 22 14:56:59.448: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 22 14:56:59.528: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-48tt9]
Feb 22 14:56:59.528: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-48tt9" in namespace "kubectl-9483" to be "running and ready"
Feb 22 14:56:59.559: INFO: Pod "e2e-test-nginx-rc-48tt9": Phase="Pending", Reason="", readiness=false. Elapsed: 31.028943ms
Feb 22 14:57:01.564: INFO: Pod "e2e-test-nginx-rc-48tt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036293181s
Feb 22 14:57:03.582: INFO: Pod "e2e-test-nginx-rc-48tt9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053736918s
Feb 22 14:57:05.590: INFO: Pod "e2e-test-nginx-rc-48tt9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061915847s
Feb 22 14:57:07.598: INFO: Pod "e2e-test-nginx-rc-48tt9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069804787s
Feb 22 14:57:09.608: INFO: Pod "e2e-test-nginx-rc-48tt9": Phase="Running", Reason="", readiness=true. Elapsed: 10.080321988s
Feb 22 14:57:09.609: INFO: Pod "e2e-test-nginx-rc-48tt9" satisfied condition "running and ready"
Feb 22 14:57:09.609: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-48tt9]
Feb 22 14:57:09.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9483'
Feb 22 14:57:09.853: INFO: stderr: ""
Feb 22 14:57:09.853: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 22 14:57:09.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9483'
Feb 22 14:57:10.011: INFO: stderr: ""
Feb 22 14:57:10.012: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:57:10.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9483" for this suite.
Feb 22 14:57:32.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:57:32.216: INFO: namespace kubectl-9483 deletion completed in 22.194459803s

• [SLOW TEST:34.962 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:57:32.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 22 14:57:32.288: INFO: Waiting up to 5m0s for pod "client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c" in namespace "containers-9937" to be "success or failure"
Feb 22 14:57:32.297: INFO: Pod "client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.855851ms
Feb 22 14:57:34.306: INFO: Pod "client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017862162s
Feb 22 14:57:36.313: INFO: Pod "client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024975396s
Feb 22 14:57:38.322: INFO: Pod "client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034142085s
Feb 22 14:57:40.333: INFO: Pod "client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044621139s
STEP: Saw pod success
Feb 22 14:57:40.333: INFO: Pod "client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c" satisfied condition "success or failure"
Feb 22 14:57:40.337: INFO: Trying to get logs from node iruya-node pod client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c container test-container: 
STEP: delete the pod
Feb 22 14:57:40.419: INFO: Waiting for pod client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c to disappear
Feb 22 14:57:40.426: INFO: Pod client-containers-1b173e6a-f06e-4e4e-a550-c59bcf5b238c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:57:40.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9937" for this suite.
Feb 22 14:57:46.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:57:46.590: INFO: namespace containers-9937 deletion completed in 6.156909551s

• [SLOW TEST:14.373 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:57:46.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0222 14:58:17.243054       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 22 14:58:17.243: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:58:17.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9391" for this suite.
Feb 22 14:58:23.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:58:23.346: INFO: namespace gc-9391 deletion completed in 6.097299366s

• [SLOW TEST:36.757 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
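The orphaning behaviour verified above is controlled by the `propagationPolicy` field of the delete request: with `Orphan`, the garbage collector strips owner references instead of cascading, so the Deployment's ReplicaSet survives. A sketch of the DeleteOptions body the test effectively sends:

```yaml
# DeleteOptions for an orphaning delete; the owned ReplicaSet is left behind.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```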
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:58:23.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 14:59:25.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6858" for this suite.
Feb 22 14:59:51.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 14:59:51.477: INFO: namespace container-probe-6858 deletion completed in 26.228459923s

• [SLOW TEST:88.130 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
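The probe test above confirms that a failing *readiness* probe marks the pod NotReady but never restarts it (only liveness probes trigger restarts). A minimal sketch of such a pod; the image and probe command are illustrative:

```yaml
# Pod whose readiness probe always fails: it keeps Running, is never Ready,
# and its restart count stays at 0.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-always-fails
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always exits non-zero -> never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
```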
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 14:59:51.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-rp85
STEP: Creating a pod to test atomic-volume-subpath
Feb 22 14:59:51.762: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rp85" in namespace "subpath-9822" to be "success or failure"
Feb 22 14:59:51.787: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Pending", Reason="", readiness=false. Elapsed: 24.991289ms
Feb 22 14:59:53.802: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040593195s
Feb 22 14:59:55.817: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055319853s
Feb 22 14:59:57.830: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068218405s
Feb 22 14:59:59.841: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079263354s
Feb 22 15:00:01.853: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 10.090912685s
Feb 22 15:00:03.878: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 12.116612814s
Feb 22 15:00:05.890: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 14.128303774s
Feb 22 15:00:07.902: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 16.140623488s
Feb 22 15:00:09.914: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 18.152714033s
Feb 22 15:00:11.931: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 20.169426273s
Feb 22 15:00:13.947: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 22.184841265s
Feb 22 15:00:15.957: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 24.194797994s
Feb 22 15:00:17.969: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 26.207167236s
Feb 22 15:00:19.985: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 28.223272189s
Feb 22 15:00:21.995: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Running", Reason="", readiness=true. Elapsed: 30.233668561s
Feb 22 15:00:24.008: INFO: Pod "pod-subpath-test-configmap-rp85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.245965937s
STEP: Saw pod success
Feb 22 15:00:24.008: INFO: Pod "pod-subpath-test-configmap-rp85" satisfied condition "success or failure"
Feb 22 15:00:24.012: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-rp85 container test-container-subpath-configmap-rp85: 
STEP: delete the pod
Feb 22 15:00:24.103: INFO: Waiting for pod pod-subpath-test-configmap-rp85 to disappear
Feb 22 15:00:24.115: INFO: Pod pod-subpath-test-configmap-rp85 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rp85
Feb 22 15:00:24.115: INFO: Deleting pod "pod-subpath-test-configmap-rp85" in namespace "subpath-9822"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:00:24.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9822" for this suite.
Feb 22 15:00:30.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:00:30.306: INFO: namespace subpath-9822 deletion completed in 6.171186303s

• [SLOW TEST:38.829 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
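The subpath test above mounts a single ConfigMap key into a container via `subPath`, which is what makes the atomic-writer update semantics interesting (subPath mounts do not see atomic symlink swaps). A sketch under assumed names:

```yaml
# ConfigMap volume consumed through a subPath mount; key and paths are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/test-volume/data-1"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/data-1
      subPath: data-1             # mounts one key rather than the whole volume
  volumes:
  - name: test-volume
    configMap:
      name: my-configmap          # assumed ConfigMap name
```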
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:00:30.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5386
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 22 15:00:30.384: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 22 15:01:04.643: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5386 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 15:01:04.643: INFO: >>> kubeConfig: /root/.kube/config
I0222 15:01:04.741310       8 log.go:172] (0xc0004454a0) (0xc002182640) Create stream
I0222 15:01:04.741381       8 log.go:172] (0xc0004454a0) (0xc002182640) Stream added, broadcasting: 1
I0222 15:01:04.750177       8 log.go:172] (0xc0004454a0) Reply frame received for 1
I0222 15:01:04.750206       8 log.go:172] (0xc0004454a0) (0xc001baa460) Create stream
I0222 15:01:04.750212       8 log.go:172] (0xc0004454a0) (0xc001baa460) Stream added, broadcasting: 3
I0222 15:01:04.751859       8 log.go:172] (0xc0004454a0) Reply frame received for 3
I0222 15:01:04.751895       8 log.go:172] (0xc0004454a0) (0xc0021826e0) Create stream
I0222 15:01:04.751905       8 log.go:172] (0xc0004454a0) (0xc0021826e0) Stream added, broadcasting: 5
I0222 15:01:04.753505       8 log.go:172] (0xc0004454a0) Reply frame received for 5
I0222 15:01:05.024786       8 log.go:172] (0xc0004454a0) Data frame received for 3
I0222 15:01:05.024910       8 log.go:172] (0xc001baa460) (3) Data frame handling
I0222 15:01:05.024946       8 log.go:172] (0xc001baa460) (3) Data frame sent
I0222 15:01:05.171438       8 log.go:172] (0xc0004454a0) Data frame received for 1
I0222 15:01:05.171642       8 log.go:172] (0xc002182640) (1) Data frame handling
I0222 15:01:05.171691       8 log.go:172] (0xc002182640) (1) Data frame sent
I0222 15:01:05.171755       8 log.go:172] (0xc0004454a0) (0xc002182640) Stream removed, broadcasting: 1
I0222 15:01:05.172673       8 log.go:172] (0xc0004454a0) (0xc0021826e0) Stream removed, broadcasting: 5
I0222 15:01:05.172766       8 log.go:172] (0xc0004454a0) (0xc001baa460) Stream removed, broadcasting: 3
I0222 15:01:05.172878       8 log.go:172] (0xc0004454a0) (0xc002182640) Stream removed, broadcasting: 1
I0222 15:01:05.172898       8 log.go:172] (0xc0004454a0) (0xc001baa460) Stream removed, broadcasting: 3
I0222 15:01:05.172913       8 log.go:172] (0xc0004454a0) (0xc0021826e0) Stream removed, broadcasting: 5
I0222 15:01:05.173662       8 log.go:172] (0xc0004454a0) Go away received
Feb 22 15:01:05.174: INFO: Waiting for endpoints: map[]
Feb 22 15:01:05.187: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5386 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 15:01:05.187: INFO: >>> kubeConfig: /root/.kube/config
I0222 15:01:05.300317       8 log.go:172] (0xc000e32c60) (0xc001baa960) Create stream
I0222 15:01:05.300507       8 log.go:172] (0xc000e32c60) (0xc001baa960) Stream added, broadcasting: 1
I0222 15:01:05.305821       8 log.go:172] (0xc000e32c60) Reply frame received for 1
I0222 15:01:05.305866       8 log.go:172] (0xc000e32c60) (0xc00168c000) Create stream
I0222 15:01:05.305878       8 log.go:172] (0xc000e32c60) (0xc00168c000) Stream added, broadcasting: 3
I0222 15:01:05.307419       8 log.go:172] (0xc000e32c60) Reply frame received for 3
I0222 15:01:05.307491       8 log.go:172] (0xc000e32c60) (0xc00168c0a0) Create stream
I0222 15:01:05.307517       8 log.go:172] (0xc000e32c60) (0xc00168c0a0) Stream added, broadcasting: 5
I0222 15:01:05.308707       8 log.go:172] (0xc000e32c60) Reply frame received for 5
I0222 15:01:05.420442       8 log.go:172] (0xc000e32c60) Data frame received for 3
I0222 15:01:05.420554       8 log.go:172] (0xc00168c000) (3) Data frame handling
I0222 15:01:05.420577       8 log.go:172] (0xc00168c000) (3) Data frame sent
I0222 15:01:05.588995       8 log.go:172] (0xc000e32c60) Data frame received for 1
I0222 15:01:05.589144       8 log.go:172] (0xc000e32c60) (0xc00168c0a0) Stream removed, broadcasting: 5
I0222 15:01:05.589208       8 log.go:172] (0xc001baa960) (1) Data frame handling
I0222 15:01:05.589224       8 log.go:172] (0xc001baa960) (1) Data frame sent
I0222 15:01:05.589573       8 log.go:172] (0xc000e32c60) (0xc001baa960) Stream removed, broadcasting: 1
I0222 15:01:05.589789       8 log.go:172] (0xc000e32c60) (0xc00168c000) Stream removed, broadcasting: 3
I0222 15:01:05.589837       8 log.go:172] (0xc000e32c60) Go away received
I0222 15:01:05.590282       8 log.go:172] (0xc000e32c60) (0xc001baa960) Stream removed, broadcasting: 1
I0222 15:01:05.590322       8 log.go:172] (0xc000e32c60) (0xc00168c000) Stream removed, broadcasting: 3
I0222 15:01:05.590354       8 log.go:172] (0xc000e32c60) (0xc00168c0a0) Stream removed, broadcasting: 5
Feb 22 15:01:05.590: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:01:05.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5386" for this suite.
Feb 22 15:01:29.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:01:29.787: INFO: namespace pod-network-test-5386 deletion completed in 24.188870288s

• [SLOW TEST:59.480 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
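The UDP check above works by exec'ing `curl` in a helper pod against a netexec-style container's `/dial` endpoint, which relays a UDP probe to each target pod and reports the hostname it answered with. A sketch of that helper pod; the image name and tag are assumptions:

```yaml
# Helper pod exposing /dial on :8080, used to proxy UDP probes to peer pods,
# e.g. /dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  containers:
  - name: hostexec
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # assumed image
    ports:
    - containerPort: 8080
```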
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:01:29.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-5269
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5269 to expose endpoints map[]
Feb 22 15:01:29.977: INFO: Get endpoints failed (6.34742ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 22 15:01:30.987: INFO: successfully validated that service multi-endpoint-test in namespace services-5269 exposes endpoints map[] (1.016463093s elapsed)
STEP: Creating pod pod1 in namespace services-5269
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5269 to expose endpoints map[pod1:[100]]
Feb 22 15:01:35.110: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.108996492s elapsed, will retry)
Feb 22 15:01:40.474: INFO: successfully validated that service multi-endpoint-test in namespace services-5269 exposes endpoints map[pod1:[100]] (9.472489172s elapsed)
STEP: Creating pod pod2 in namespace services-5269
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5269 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 22 15:01:45.824: INFO: Unexpected endpoints: found map[62d78df6-1403-401e-b43f-a47e2d43f49f:[100]], expected map[pod1:[100] pod2:[101]] (5.341443421s elapsed, will retry)
Feb 22 15:01:49.047: INFO: successfully validated that service multi-endpoint-test in namespace services-5269 exposes endpoints map[pod1:[100] pod2:[101]] (8.564791909s elapsed)
STEP: Deleting pod pod1 in namespace services-5269
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5269 to expose endpoints map[pod2:[101]]
Feb 22 15:01:49.100: INFO: successfully validated that service multi-endpoint-test in namespace services-5269 exposes endpoints map[pod2:[101]] (34.702455ms elapsed)
STEP: Deleting pod pod2 in namespace services-5269
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5269 to expose endpoints map[]
Feb 22 15:01:50.185: INFO: successfully validated that service multi-endpoint-test in namespace services-5269 exposes endpoints map[] (1.029688871s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:01:50.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5269" for this suite.
Feb 22 15:02:12.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:02:12.493: INFO: namespace services-5269 deletion completed in 22.193949788s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:42.704 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
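The multiport test above expects endpoints on container ports 100 and 101 (visible in the `map[pod1:[100] pod2:[101]]` lines), which corresponds to a Service with two named ports. A sketch; the selector label and service port numbers are assumptions:

```yaml
# Service exposing two named ports; targetPorts match the endpoints in the log.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-5269
spec:
  selector:
    test: multi-endpoint-test     # assumed pod label
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
```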
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:02:12.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0c41ddc6-b14e-40f7-8201-cfa78c122ada
STEP: Creating a pod to test consume configMaps
Feb 22 15:02:12.767: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef" in namespace "projected-5433" to be "success or failure"
Feb 22 15:02:12.775: INFO: Pod "pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef": Phase="Pending", Reason="", readiness=false. Elapsed: 7.011748ms
Feb 22 15:02:14.791: INFO: Pod "pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023366339s
Feb 22 15:02:16.802: INFO: Pod "pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034670064s
Feb 22 15:02:18.819: INFO: Pod "pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051146595s
Feb 22 15:02:20.832: INFO: Pod "pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064927597s
STEP: Saw pod success
Feb 22 15:02:20.833: INFO: Pod "pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef" satisfied condition "success or failure"
Feb 22 15:02:20.838: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef container projected-configmap-volume-test: 
STEP: delete the pod
Feb 22 15:02:20.964: INFO: Waiting for pod pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef to disappear
Feb 22 15:02:20.975: INFO: Pod pod-projected-configmaps-c5853e74-2bbe-44c2-aefd-843ae50130ef no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:02:20.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5433" for this suite.
Feb 22 15:02:27.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:02:27.170: INFO: namespace projected-5433 deletion completed in 6.184589673s

• [SLOW TEST:14.676 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
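The projected-volume test above mounts the same ConfigMap at two paths in one pod. A trimmed sketch with assumed names:

```yaml
# One ConfigMap consumed via two projected volumes in the same pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected-volume-1/data-1"]
    volumeMounts:
    - name: volume-1
      mountPath: /etc/projected-volume-1
    - name: volume-2
      mountPath: /etc/projected-volume-2
  volumes:
  - name: volume-1
    projected:
      sources:
      - configMap:
          name: my-configmap      # assumed ConfigMap name
  - name: volume-2
    projected:
      sources:
      - configMap:
          name: my-configmap
```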
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:02:27.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 22 15:02:27.301: INFO: Waiting up to 5m0s for pod "pod-c43b7889-9966-444b-9821-16c1d36868f9" in namespace "emptydir-2248" to be "success or failure"
Feb 22 15:02:27.320: INFO: Pod "pod-c43b7889-9966-444b-9821-16c1d36868f9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.847423ms
Feb 22 15:02:29.337: INFO: Pod "pod-c43b7889-9966-444b-9821-16c1d36868f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035038184s
Feb 22 15:02:31.390: INFO: Pod "pod-c43b7889-9966-444b-9821-16c1d36868f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088546404s
Feb 22 15:02:33.398: INFO: Pod "pod-c43b7889-9966-444b-9821-16c1d36868f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096068118s
Feb 22 15:02:35.405: INFO: Pod "pod-c43b7889-9966-444b-9821-16c1d36868f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10376125s
Feb 22 15:02:37.439: INFO: Pod "pod-c43b7889-9966-444b-9821-16c1d36868f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.136836868s
STEP: Saw pod success
Feb 22 15:02:37.439: INFO: Pod "pod-c43b7889-9966-444b-9821-16c1d36868f9" satisfied condition "success or failure"
Feb 22 15:02:37.443: INFO: Trying to get logs from node iruya-node pod pod-c43b7889-9966-444b-9821-16c1d36868f9 container test-container: 
STEP: delete the pod
Feb 22 15:02:37.515: INFO: Waiting for pod pod-c43b7889-9966-444b-9821-16c1d36868f9 to disappear
Feb 22 15:02:37.519: INFO: Pod pod-c43b7889-9966-444b-9821-16c1d36868f9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:02:37.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2248" for this suite.
Feb 22 15:02:43.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:02:43.857: INFO: namespace emptydir-2248 deletion completed in 6.331854706s

• [SLOW TEST:16.687 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:02:43.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 15:02:44.128: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 22 15:02:44.207: INFO: Number of nodes with available pods: 0
Feb 22 15:02:44.207: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:45.229: INFO: Number of nodes with available pods: 0
Feb 22 15:02:45.229: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:46.288: INFO: Number of nodes with available pods: 0
Feb 22 15:02:46.288: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:47.404: INFO: Number of nodes with available pods: 0
Feb 22 15:02:47.405: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:48.220: INFO: Number of nodes with available pods: 0
Feb 22 15:02:48.220: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:49.236: INFO: Number of nodes with available pods: 0
Feb 22 15:02:49.236: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:51.273: INFO: Number of nodes with available pods: 0
Feb 22 15:02:51.273: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:52.230: INFO: Number of nodes with available pods: 0
Feb 22 15:02:52.230: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:53.246: INFO: Number of nodes with available pods: 0
Feb 22 15:02:53.246: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:54.231: INFO: Number of nodes with available pods: 0
Feb 22 15:02:54.231: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:02:55.226: INFO: Number of nodes with available pods: 2
Feb 22 15:02:55.226: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 22 15:02:55.325: INFO: Wrong image for pod: daemon-set-852m6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:02:55.325: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:02:56.354: INFO: Wrong image for pod: daemon-set-852m6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:02:56.354: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:02:57.356: INFO: Wrong image for pod: daemon-set-852m6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:02:57.356: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:02:58.356: INFO: Wrong image for pod: daemon-set-852m6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:02:58.356: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:02:59.644: INFO: Wrong image for pod: daemon-set-852m6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:02:59.645: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:00.350: INFO: Wrong image for pod: daemon-set-852m6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:00.350: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:01.350: INFO: Wrong image for pod: daemon-set-852m6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:01.350: INFO: Pod daemon-set-852m6 is not available
Feb 22 15:03:01.350: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:02.346: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:02.346: INFO: Pod daemon-set-r5mwz is not available
Feb 22 15:03:03.352: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:03.352: INFO: Pod daemon-set-r5mwz is not available
Feb 22 15:03:04.356: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:04.356: INFO: Pod daemon-set-r5mwz is not available
Feb 22 15:03:05.349: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:05.349: INFO: Pod daemon-set-r5mwz is not available
Feb 22 15:03:06.725: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:06.725: INFO: Pod daemon-set-r5mwz is not available
Feb 22 15:03:07.351: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:07.351: INFO: Pod daemon-set-r5mwz is not available
Feb 22 15:03:08.354: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:08.354: INFO: Pod daemon-set-r5mwz is not available
Feb 22 15:03:09.353: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:09.353: INFO: Pod daemon-set-r5mwz is not available
Feb 22 15:03:10.348: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:11.350: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:12.346: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:13.348: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:14.345: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:15.352: INFO: Wrong image for pod: daemon-set-bj7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 22 15:03:15.352: INFO: Pod daemon-set-bj7dr is not available
Feb 22 15:03:16.354: INFO: Pod daemon-set-rml9s is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 22 15:03:16.380: INFO: Number of nodes with available pods: 1
Feb 22 15:03:16.380: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:03:17.402: INFO: Number of nodes with available pods: 1
Feb 22 15:03:17.402: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:03:21.897: INFO: Number of nodes with available pods: 1
Feb 22 15:03:21.897: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:03:22.400: INFO: Number of nodes with available pods: 1
Feb 22 15:03:22.400: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:03:23.399: INFO: Number of nodes with available pods: 1
Feb 22 15:03:23.400: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:03:24.394: INFO: Number of nodes with available pods: 1
Feb 22 15:03:24.394: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:03:25.422: INFO: Number of nodes with available pods: 1
Feb 22 15:03:25.423: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:03:26.398: INFO: Number of nodes with available pods: 1
Feb 22 15:03:26.398: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:03:27.397: INFO: Number of nodes with available pods: 1
Feb 22 15:03:27.397: INFO: Node iruya-node is running more than one daemon pod
Feb 22 15:03:28.399: INFO: Number of nodes with available pods: 2
Feb 22 15:03:28.400: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3589, will wait for the garbage collector to delete the pods
Feb 22 15:03:28.519: INFO: Deleting DaemonSet.extensions daemon-set took: 16.60171ms
Feb 22 15:03:28.820: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.783765ms
Feb 22 15:03:46.632: INFO: Number of nodes with available pods: 0
Feb 22 15:03:46.632: INFO: Number of running nodes: 0, number of available pods: 0
Feb 22 15:03:46.639: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3589/daemonsets","resourceVersion":"25341148"},"items":null}

Feb 22 15:03:46.643: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3589/pods","resourceVersion":"25341148"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:03:46.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3589" for this suite.
Feb 22 15:03:52.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:03:52.809: INFO: namespace daemonsets-3589 deletion completed in 6.146782678s

• [SLOW TEST:68.949 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:03:52.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 22 15:03:52.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7863'
Feb 22 15:03:53.471: INFO: stderr: ""
Feb 22 15:03:53.471: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 22 15:03:53.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:03:53.700: INFO: stderr: ""
Feb 22 15:03:53.700: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-92rhm "
Feb 22 15:03:53.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:03:53.978: INFO: stderr: ""
Feb 22 15:03:53.978: INFO: stdout: ""
Feb 22 15:03:53.978: INFO: update-demo-nautilus-5bd8c is created but not running
Feb 22 15:03:58.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:03:59.165: INFO: stderr: ""
Feb 22 15:03:59.165: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-92rhm "
Feb 22 15:03:59.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:03.003: INFO: stderr: ""
Feb 22 15:04:03.003: INFO: stdout: ""
Feb 22 15:04:03.003: INFO: update-demo-nautilus-5bd8c is created but not running
Feb 22 15:04:08.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:04:08.472: INFO: stderr: ""
Feb 22 15:04:08.472: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-92rhm "
Feb 22 15:04:08.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:08.755: INFO: stderr: ""
Feb 22 15:04:08.755: INFO: stdout: ""
Feb 22 15:04:08.755: INFO: update-demo-nautilus-5bd8c is created but not running
Feb 22 15:04:13.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:04:13.926: INFO: stderr: ""
Feb 22 15:04:13.927: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-92rhm "
Feb 22 15:04:13.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:14.161: INFO: stderr: ""
Feb 22 15:04:14.161: INFO: stdout: "true"
Feb 22 15:04:14.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:14.240: INFO: stderr: ""
Feb 22 15:04:14.241: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 15:04:14.241: INFO: validating pod update-demo-nautilus-5bd8c
Feb 22 15:04:14.264: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 22 15:04:14.264: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 15:04:14.264: INFO: update-demo-nautilus-5bd8c is verified up and running
Feb 22 15:04:14.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92rhm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:14.342: INFO: stderr: ""
Feb 22 15:04:14.342: INFO: stdout: "true"
Feb 22 15:04:14.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92rhm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:14.417: INFO: stderr: ""
Feb 22 15:04:14.417: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 15:04:14.417: INFO: validating pod update-demo-nautilus-92rhm
Feb 22 15:04:14.426: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 22 15:04:14.426: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 15:04:14.426: INFO: update-demo-nautilus-92rhm is verified up and running
STEP: scaling down the replication controller
Feb 22 15:04:14.428: INFO: scanned /root for discovery docs: 
Feb 22 15:04:14.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7863'
Feb 22 15:04:15.624: INFO: stderr: ""
Feb 22 15:04:15.624: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 22 15:04:15.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:04:15.754: INFO: stderr: ""
Feb 22 15:04:15.754: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-92rhm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 22 15:04:20.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:04:20.924: INFO: stderr: ""
Feb 22 15:04:20.924: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-92rhm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 22 15:04:25.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:04:26.115: INFO: stderr: ""
Feb 22 15:04:26.115: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-92rhm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 22 15:04:31.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:04:31.261: INFO: stderr: ""
Feb 22 15:04:31.261: INFO: stdout: "update-demo-nautilus-5bd8c "
Feb 22 15:04:31.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:31.403: INFO: stderr: ""
Feb 22 15:04:31.403: INFO: stdout: "true"
Feb 22 15:04:31.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:31.533: INFO: stderr: ""
Feb 22 15:04:31.533: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 15:04:31.534: INFO: validating pod update-demo-nautilus-5bd8c
Feb 22 15:04:31.541: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 22 15:04:31.541: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 15:04:31.541: INFO: update-demo-nautilus-5bd8c is verified up and running
STEP: scaling up the replication controller
Feb 22 15:04:31.543: INFO: scanned /root for discovery docs: 
Feb 22 15:04:31.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7863'
Feb 22 15:04:32.745: INFO: stderr: ""
Feb 22 15:04:32.746: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 22 15:04:32.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:04:32.905: INFO: stderr: ""
Feb 22 15:04:32.905: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-7rjsp "
Feb 22 15:04:32.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:33.024: INFO: stderr: ""
Feb 22 15:04:33.025: INFO: stdout: "true"
Feb 22 15:04:33.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:33.201: INFO: stderr: ""
Feb 22 15:04:33.201: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 15:04:33.201: INFO: validating pod update-demo-nautilus-5bd8c
Feb 22 15:04:33.208: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 22 15:04:33.209: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 15:04:33.209: INFO: update-demo-nautilus-5bd8c is verified up and running
Feb 22 15:04:33.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rjsp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:33.296: INFO: stderr: ""
Feb 22 15:04:33.297: INFO: stdout: ""
Feb 22 15:04:33.297: INFO: update-demo-nautilus-7rjsp is created but not running
Feb 22 15:04:38.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:04:38.550: INFO: stderr: ""
Feb 22 15:04:38.550: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-7rjsp "
Feb 22 15:04:38.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:38.725: INFO: stderr: ""
Feb 22 15:04:38.726: INFO: stdout: "true"
Feb 22 15:04:38.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:38.882: INFO: stderr: ""
Feb 22 15:04:38.882: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 15:04:38.882: INFO: validating pod update-demo-nautilus-5bd8c
Feb 22 15:04:38.898: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 22 15:04:38.898: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 15:04:38.898: INFO: update-demo-nautilus-5bd8c is verified up and running
Feb 22 15:04:38.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rjsp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:38.995: INFO: stderr: ""
Feb 22 15:04:38.995: INFO: stdout: ""
Feb 22 15:04:38.995: INFO: update-demo-nautilus-7rjsp is created but not running
Feb 22 15:04:43.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7863'
Feb 22 15:04:44.155: INFO: stderr: ""
Feb 22 15:04:44.155: INFO: stdout: "update-demo-nautilus-5bd8c update-demo-nautilus-7rjsp "
Feb 22 15:04:44.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:44.241: INFO: stderr: ""
Feb 22 15:04:44.241: INFO: stdout: "true"
Feb 22 15:04:44.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5bd8c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:44.319: INFO: stderr: ""
Feb 22 15:04:44.319: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 15:04:44.319: INFO: validating pod update-demo-nautilus-5bd8c
Feb 22 15:04:44.326: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 22 15:04:44.326: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 15:04:44.326: INFO: update-demo-nautilus-5bd8c is verified up and running
Feb 22 15:04:44.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rjsp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:44.400: INFO: stderr: ""
Feb 22 15:04:44.400: INFO: stdout: "true"
Feb 22 15:04:44.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7rjsp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7863'
Feb 22 15:04:44.479: INFO: stderr: ""
Feb 22 15:04:44.480: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 22 15:04:44.480: INFO: validating pod update-demo-nautilus-7rjsp
Feb 22 15:04:44.505: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 22 15:04:44.506: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 22 15:04:44.506: INFO: update-demo-nautilus-7rjsp is verified up and running
STEP: using delete to clean up resources
Feb 22 15:04:44.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7863'
Feb 22 15:04:44.671: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 22 15:04:44.671: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 22 15:04:44.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7863'
Feb 22 15:04:44.804: INFO: stderr: "No resources found.\n"
Feb 22 15:04:44.804: INFO: stdout: ""
Feb 22 15:04:44.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7863 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 22 15:04:45.046: INFO: stderr: ""
Feb 22 15:04:45.046: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:04:45.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7863" for this suite.
Feb 22 15:05:07.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:05:07.217: INFO: namespace kubectl-7863 deletion completed in 22.15849964s

• [SLOW TEST:74.408 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:05:07.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb 22 15:05:07.351: INFO: Waiting up to 5m0s for pod "var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8" in namespace "var-expansion-2011" to be "success or failure"
Feb 22 15:05:07.441: INFO: Pod "var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 89.82457ms
Feb 22 15:05:09.456: INFO: Pod "var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105241842s
Feb 22 15:05:11.473: INFO: Pod "var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122723777s
Feb 22 15:05:13.489: INFO: Pod "var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138412752s
Feb 22 15:05:15.500: INFO: Pod "var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14882161s
Feb 22 15:05:17.508: INFO: Pod "var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.156776109s
STEP: Saw pod success
Feb 22 15:05:17.508: INFO: Pod "var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8" satisfied condition "success or failure"
Feb 22 15:05:17.511: INFO: Trying to get logs from node iruya-node pod var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8 container dapi-container: 
STEP: delete the pod
Feb 22 15:05:17.551: INFO: Waiting for pod var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8 to disappear
Feb 22 15:05:17.557: INFO: Pod var-expansion-c1ba4c17-b927-4bb7-95ae-07231a5c5dc8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:05:17.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2011" for this suite.
Feb 22 15:05:23.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:05:23.751: INFO: namespace var-expansion-2011 deletion completed in 6.185655771s

• [SLOW TEST:16.533 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
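The Variable Expansion spec above exercises Kubernetes' `$(VAR)` substitution in a container's `args`: references to defined env vars are replaced, and unresolvable references are left verbatim. A hedged sketch of that behavior (the `$$` escape rule is omitted for brevity, and the names are illustrative):

```python
import re

# Minimal model of Kubernetes $(VAR) expansion in container args:
# $(NAME) is replaced with the env var NAME; unknown references are
# left untouched. Escaping via $$ is intentionally not modeled.
def expand_args(args, env):
    def repl(match):
        return env.get(match.group(1), match.group(0))
    pattern = r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)"
    return [re.sub(pattern, repl, a) for a in args]

env = {"POD_NAME": "var-expansion-demo"}
print(expand_args(["echo", "$(POD_NAME)", "$(MISSING)"], env))
```

The test pod succeeds when the expanded args, echoed by the container, match the expected values.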
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:05:23.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 15:05:23.909: INFO: Creating ReplicaSet my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec
Feb 22 15:05:23.953: INFO: Pod name my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec: Found 0 pods out of 1
Feb 22 15:05:28.966: INFO: Pod name my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec: Found 1 pods out of 1
Feb 22 15:05:28.967: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec" is running
Feb 22 15:05:33.130: INFO: Pod "my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec-mr4bl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-22 15:05:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-22 15:05:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-22 15:05:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-22 15:05:23 +0000 UTC Reason: Message:}])
Feb 22 15:05:33.131: INFO: Trying to dial the pod
Feb 22 15:05:38.201: INFO: Controller my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec: Got expected result from replica 1 [my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec-mr4bl]: "my-hostname-basic-5bdd5e9e-5b81-4ad6-8cb4-ce216c1fa3ec-mr4bl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:05:38.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1295" for this suite.
Feb 22 15:05:44.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:05:44.345: INFO: namespace replicaset-1295 deletion completed in 6.134369092s

• [SLOW TEST:20.594 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
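The "Found 0 pods out of 1 … Found 1 pods out of 1" lines above (and the per-second pod-phase polls throughout this run) are instances of one fixed-interval polling pattern: re-run a check until it passes or a timeout elapses. A sketch of that loop; the names are illustrative, not the e2e framework's actual API:

```python
import time

# Fixed-interval polling, as logged by the specs in this run: call
# `check` every `interval` seconds until it returns True or `timeout`
# seconds have elapsed. `clock` and `sleep` are injectable for testing.
def wait_for(check, timeout=300.0, interval=1.0,
             clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False

# Usage: a check that succeeds on the third poll.
polls = iter([False, False, True])
print(wait_for(lambda: next(polls), timeout=10, interval=0,
               sleep=lambda s: None))  # True
```

The framework's `WaitFor completed with timeout 5m0s` line corresponds to this loop returning before its deadline.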
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:05:44.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:05:44.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5222" for this suite.
Feb 22 15:05:50.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:05:50.594: INFO: namespace services-5222 deletion completed in 6.17731412s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.249 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:05:50.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 22 15:05:50.738: INFO: Waiting up to 5m0s for pod "pod-448892aa-99ef-43d6-a27b-7f1e15ebf795" in namespace "emptydir-392" to be "success or failure"
Feb 22 15:05:50.754: INFO: Pod "pod-448892aa-99ef-43d6-a27b-7f1e15ebf795": Phase="Pending", Reason="", readiness=false. Elapsed: 15.235953ms
Feb 22 15:05:52.761: INFO: Pod "pod-448892aa-99ef-43d6-a27b-7f1e15ebf795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022910307s
Feb 22 15:05:54.841: INFO: Pod "pod-448892aa-99ef-43d6-a27b-7f1e15ebf795": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103035819s
Feb 22 15:05:56.853: INFO: Pod "pod-448892aa-99ef-43d6-a27b-7f1e15ebf795": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114824148s
Feb 22 15:05:58.867: INFO: Pod "pod-448892aa-99ef-43d6-a27b-7f1e15ebf795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128649687s
STEP: Saw pod success
Feb 22 15:05:58.867: INFO: Pod "pod-448892aa-99ef-43d6-a27b-7f1e15ebf795" satisfied condition "success or failure"
Feb 22 15:05:58.871: INFO: Trying to get logs from node iruya-node pod pod-448892aa-99ef-43d6-a27b-7f1e15ebf795 container test-container: 
STEP: delete the pod
Feb 22 15:05:58.988: INFO: Waiting for pod pod-448892aa-99ef-43d6-a27b-7f1e15ebf795 to disappear
Feb 22 15:05:58.995: INFO: Pod pod-448892aa-99ef-43d6-a27b-7f1e15ebf795 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:05:58.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-392" for this suite.
Feb 22 15:06:05.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:06:05.172: INFO: namespace emptydir-392 deletion completed in 6.167921402s

• [SLOW TEST:14.577 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
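The "(non-root,0666,tmpfs)" spec above verifies that a file on the emptyDir mount carries mode 0666. The permission check itself can be sketched locally; the tmpfs medium and in-pod execution are not reproduced here:

```python
import os
import stat
import tempfile

# Sketch of the mode check behind the (non-root,0666,tmpfs) spec:
# create a file, force mode 0666, and read the permission bits back.
# The real test performs the equivalent inside the pod on the
# emptyDir mount; this runs against a local temp file instead.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)  # chmod is not subject to the process umask
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o666
os.unlink(path)
```

The pod reaches `Succeeded` only when the mode read back inside the container matches the requested 0666.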
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:06:05.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 15:06:05.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7" in namespace "downward-api-5819" to be "success or failure"
Feb 22 15:06:05.331: INFO: Pod "downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.919744ms
Feb 22 15:06:07.358: INFO: Pod "downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038959452s
Feb 22 15:06:09.368: INFO: Pod "downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048719815s
Feb 22 15:06:11.423: INFO: Pod "downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103988847s
Feb 22 15:06:13.432: INFO: Pod "downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113114645s
Feb 22 15:06:15.466: INFO: Pod "downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146856092s
STEP: Saw pod success
Feb 22 15:06:15.466: INFO: Pod "downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7" satisfied condition "success or failure"
Feb 22 15:06:15.472: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7 container client-container: 
STEP: delete the pod
Feb 22 15:06:15.532: INFO: Waiting for pod downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7 to disappear
Feb 22 15:06:15.540: INFO: Pod downwardapi-volume-7a6f39b7-9960-4166-b54f-882855a589d7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:06:15.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5819" for this suite.
Feb 22 15:06:21.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:06:21.741: INFO: namespace downward-api-5819 deletion completed in 6.19388438s

• [SLOW TEST:16.568 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
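The Downward API volume spec above mounts a file whose contents are the container's own memory request. A hedged YAML sketch of that kind of pod; the pod, container, and volume names are illustrative, not the test's actual generated spec:

```yaml
# Illustrative pod exposing its memory request via a downwardAPI
# volume, in the spirit of the spec above (names are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

The test then reads the mounted file from the container's logs and compares it against the declared request.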
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:06:21.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 22 15:06:31.914: INFO: Pod pod-hostip-d2896ae4-177d-4148-a676-8919c6fd431c has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:06:31.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5700" for this suite.
Feb 22 15:06:53.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:06:54.156: INFO: namespace pods-5700 deletion completed in 22.234167895s

• [SLOW TEST:32.415 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:06:54.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 22 15:06:54.296: INFO: Waiting up to 5m0s for pod "pod-bf3baac5-dc78-4949-b862-27c77126b3c1" in namespace "emptydir-3720" to be "success or failure"
Feb 22 15:06:54.610: INFO: Pod "pod-bf3baac5-dc78-4949-b862-27c77126b3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 313.910595ms
Feb 22 15:06:56.626: INFO: Pod "pod-bf3baac5-dc78-4949-b862-27c77126b3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329667799s
Feb 22 15:06:58.639: INFO: Pod "pod-bf3baac5-dc78-4949-b862-27c77126b3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342677503s
Feb 22 15:07:00.654: INFO: Pod "pod-bf3baac5-dc78-4949-b862-27c77126b3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.358391064s
Feb 22 15:07:02.665: INFO: Pod "pod-bf3baac5-dc78-4949-b862-27c77126b3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.368790786s
Feb 22 15:07:04.672: INFO: Pod "pod-bf3baac5-dc78-4949-b862-27c77126b3c1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.375706454s
Feb 22 15:07:07.249: INFO: Pod "pod-bf3baac5-dc78-4949-b862-27c77126b3c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.952798902s
STEP: Saw pod success
Feb 22 15:07:07.249: INFO: Pod "pod-bf3baac5-dc78-4949-b862-27c77126b3c1" satisfied condition "success or failure"
Feb 22 15:07:07.267: INFO: Trying to get logs from node iruya-node pod pod-bf3baac5-dc78-4949-b862-27c77126b3c1 container test-container: 
STEP: delete the pod
Feb 22 15:07:07.382: INFO: Waiting for pod pod-bf3baac5-dc78-4949-b862-27c77126b3c1 to disappear
Feb 22 15:07:07.389: INFO: Pod pod-bf3baac5-dc78-4949-b862-27c77126b3c1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:07:07.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3720" for this suite.
Feb 22 15:07:13.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:07:13.585: INFO: namespace emptydir-3720 deletion completed in 6.191613953s

• [SLOW TEST:19.429 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:07:13.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 22 15:07:13.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4628'
Feb 22 15:07:17.026: INFO: stderr: ""
Feb 22 15:07:17.026: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 22 15:07:18.041: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:18.041: INFO: Found 0 / 1
Feb 22 15:07:19.036: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:19.036: INFO: Found 0 / 1
Feb 22 15:07:20.042: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:20.043: INFO: Found 0 / 1
Feb 22 15:07:21.040: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:21.040: INFO: Found 0 / 1
Feb 22 15:07:22.062: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:22.062: INFO: Found 0 / 1
Feb 22 15:07:23.043: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:23.043: INFO: Found 0 / 1
Feb 22 15:07:24.039: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:24.039: INFO: Found 0 / 1
Feb 22 15:07:25.039: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:25.039: INFO: Found 0 / 1
Feb 22 15:07:26.039: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:26.039: INFO: Found 1 / 1
Feb 22 15:07:26.039: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 22 15:07:26.046: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:26.046: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 22 15:07:26.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-kdmvp --namespace=kubectl-4628 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 22 15:07:26.195: INFO: stderr: ""
Feb 22 15:07:26.195: INFO: stdout: "pod/redis-master-kdmvp patched\n"
STEP: checking annotations
Feb 22 15:07:26.258: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:07:26.258: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:07:26.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4628" for this suite.
Feb 22 15:07:48.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:07:48.482: INFO: namespace kubectl-4628 deletion completed in 22.214636583s

• [SLOW TEST:34.896 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
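The `kubectl patch pod … -p {"metadata":{"annotations":{"x":"y"}}}` call above merges the patch into the live object: nested maps merge recursively and scalars are replaced, so existing annotations survive. A sketch of that map-merge behavior only (strategic merge patch's list directives and JSON merge patch's null-deletes are out of scope):

```python
# Model of how the annotations patch above merges into a pod object:
# nested dicts merge recursively, everything else is replaced. This
# covers only the map-merge case, not list merge keys or deletions.
def merge_patch(obj, patch):
    out = dict(obj)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_patch(out[key], value)
        else:
            out[key] = value
    return out

pod = {"metadata": {"name": "redis-master-kdmvp",
                    "annotations": {"a": "b"}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"]["annotations"])  # both a:b and x:y present
```

This is why the "checking annotations" step passes: the patch adds `x: y` without clobbering the pod's other metadata.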
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:07:48.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0ff04b60-f4ff-4d32-92ea-c47769f7a8c5
STEP: Creating a pod to test consume configMaps
Feb 22 15:07:48.651: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89" in namespace "configmap-9833" to be "success or failure"
Feb 22 15:07:48.673: INFO: Pod "pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89": Phase="Pending", Reason="", readiness=false. Elapsed: 21.654822ms
Feb 22 15:07:50.693: INFO: Pod "pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041804122s
Feb 22 15:07:52.716: INFO: Pod "pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065002037s
Feb 22 15:07:54.721: INFO: Pod "pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070034077s
Feb 22 15:07:57.354: INFO: Pod "pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.702933568s
Feb 22 15:07:59.367: INFO: Pod "pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89": Phase="Pending", Reason="", readiness=false. Elapsed: 10.715945736s
Feb 22 15:08:01.382: INFO: Pod "pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.730800439s
STEP: Saw pod success
Feb 22 15:08:01.383: INFO: Pod "pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89" satisfied condition "success or failure"
Feb 22 15:08:01.391: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89 container configmap-volume-test: 
STEP: delete the pod
Feb 22 15:08:01.527: INFO: Waiting for pod pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89 to disappear
Feb 22 15:08:01.561: INFO: Pod pod-configmaps-a7595487-fae8-4ce9-94e8-aaf92ce66a89 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:08:01.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9833" for this suite.
Feb 22 15:08:07.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:08:07.733: INFO: namespace configmap-9833 deletion completed in 6.144077001s

• [SLOW TEST:19.250 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:08:07.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 22 15:08:07.903: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258" in namespace "downward-api-885" to be "success or failure"
Feb 22 15:08:07.921: INFO: Pod "downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258": Phase="Pending", Reason="", readiness=false. Elapsed: 17.988098ms
Feb 22 15:08:09.934: INFO: Pod "downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030787692s
Feb 22 15:08:11.947: INFO: Pod "downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0432479s
Feb 22 15:08:13.974: INFO: Pod "downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070319846s
Feb 22 15:08:16.020: INFO: Pod "downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116417455s
STEP: Saw pod success
Feb 22 15:08:16.020: INFO: Pod "downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258" satisfied condition "success or failure"
Feb 22 15:08:16.030: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258 container client-container: 
STEP: delete the pod
Feb 22 15:08:16.215: INFO: Waiting for pod downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258 to disappear
Feb 22 15:08:16.225: INFO: Pod downwardapi-volume-45025be5-423e-41fa-827d-43dd08818258 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:08:16.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-885" for this suite.
Feb 22 15:08:22.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:08:22.401: INFO: namespace downward-api-885 deletion completed in 6.167931635s

• [SLOW TEST:14.668 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:08:22.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 22 15:08:22.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7136'
Feb 22 15:08:22.898: INFO: stderr: ""
Feb 22 15:08:22.898: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 22 15:08:22.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7136'
Feb 22 15:08:24.800: INFO: stderr: ""
Feb 22 15:08:24.801: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 22 15:08:25.819: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:08:25.819: INFO: Found 0 / 1
Feb 22 15:08:26.816: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:08:26.816: INFO: Found 0 / 1
Feb 22 15:08:27.813: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:08:27.813: INFO: Found 0 / 1
Feb 22 15:08:28.816: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:08:28.816: INFO: Found 0 / 1
Feb 22 15:08:29.972: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:08:29.972: INFO: Found 0 / 1
Feb 22 15:08:30.823: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:08:30.824: INFO: Found 0 / 1
Feb 22 15:08:31.811: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:08:31.811: INFO: Found 0 / 1
Feb 22 15:08:32.811: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:08:32.811: INFO: Found 1 / 1
Feb 22 15:08:32.811: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 22 15:08:32.816: INFO: Selector matched 1 pods for map[app:redis]
Feb 22 15:08:32.816: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 22 15:08:32.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-fsw7c --namespace=kubectl-7136'
Feb 22 15:08:33.034: INFO: stderr: ""
Feb 22 15:08:33.034: INFO: stdout: "Name:           redis-master-fsw7c\nNamespace:      kubectl-7136\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sat, 22 Feb 2020 15:08:22 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://c411d72f1ed2374c9d1756fa3d42ced819a9cb47a473088085cd3a32011b3604\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 22 Feb 2020 15:08:31 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9cq4t (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-9cq4t:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-9cq4t\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  11s   default-scheduler    Successfully assigned kubectl-7136/redis-master-fsw7c to iruya-node\n  Normal  Pulled     5s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Feb 22 15:08:33.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7136'
Feb 22 15:08:33.180: INFO: stderr: ""
Feb 22 15:08:33.180: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-7136\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  11s   replication-controller  Created pod: redis-master-fsw7c\n"
Feb 22 15:08:33.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7136'
Feb 22 15:08:33.298: INFO: stderr: ""
Feb 22 15:08:33.298: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-7136\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.108.80.220\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb 22 15:08:33.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb 22 15:08:33.428: INFO: stderr: ""
Feb 22 15:08:33.428: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 22 Feb 2020 15:08:31 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 22 Feb 2020 15:08:31 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 22 Feb 2020 15:08:31 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 22 Feb 2020 15:08:31 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         202d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         133d\n  kubectl-7136               redis-master-fsw7c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 22 15:08:33.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7136'
Feb 22 15:08:33.534: INFO: stderr: ""
Feb 22 15:08:33.535: INFO: stdout: "Name:         kubectl-7136\nLabels:       e2e-framework=kubectl\n              e2e-run=6fd81cb8-7201-4007-8ed7-6e093311ea59\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:08:33.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7136" for this suite.
Feb 22 15:08:55.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:08:55.734: INFO: namespace kubectl-7136 deletion completed in 22.193037937s

• [SLOW TEST:33.331 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
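For orientation: the ReplicationController this test pipes to `kubectl create -f -` looks roughly like the sketch below. The image, labels, selector, and port are taken from the `kubectl describe` output above; the remaining fields are an illustrative reconstruction, not the exact e2e fixture.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379
```

The test then runs `kubectl describe` against the pod, the rc, the service, the node, and the namespace, asserting that each output contains the expected fields (name, labels, selector, events, and so on).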
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:08:55.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-319c5365-bc45-41af-9d2a-fab1c433cf22
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-319c5365-bc45-41af-9d2a-fab1c433cf22
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:10:14.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6500" for this suite.
Feb 22 15:10:30.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:10:30.184: INFO: namespace configmap-6500 deletion completed in 16.166910832s

• [SLOW TEST:94.448 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
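The "updates should be reflected in volume" test mounts a ConfigMap as a volume, rewrites the ConfigMap's data, and polls the mounted file in the running pod until the new value appears. A minimal sketch of that setup follows; the mount path, image, and polling command are assumptions for illustration, not the exact fixture.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd              # illustrative name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                     # assumed; the fixture uses a small test image
    command: ["/bin/sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 1; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-319c5365-bc45-41af-9d2a-fab1c433cf22
```

ConfigMap volume updates propagate on the kubelet's periodic sync rather than instantly, which is why the "waiting to observe update in volume" step above can take tens of seconds.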
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:10:30.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b0645647-7a7c-4125-a334-831fc576978e
STEP: Creating a pod to test consume configMaps
Feb 22 15:10:30.377: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710" in namespace "projected-4560" to be "success or failure"
Feb 22 15:10:30.393: INFO: Pod "pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710": Phase="Pending", Reason="", readiness=false. Elapsed: 16.251673ms
Feb 22 15:10:32.401: INFO: Pod "pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023915858s
Feb 22 15:10:34.408: INFO: Pod "pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031229194s
Feb 22 15:10:36.421: INFO: Pod "pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043792583s
Feb 22 15:10:38.449: INFO: Pod "pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072249473s
Feb 22 15:10:40.462: INFO: Pod "pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084950928s
STEP: Saw pod success
Feb 22 15:10:40.462: INFO: Pod "pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710" satisfied condition "success or failure"
Feb 22 15:10:40.467: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 22 15:10:40.610: INFO: Waiting for pod pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710 to disappear
Feb 22 15:10:40.617: INFO: Pod pod-projected-configmaps-a9456510-3817-4104-9b7c-2ae81a02c710 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:10:40.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4560" for this suite.
Feb 22 15:10:46.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:10:46.783: INFO: namespace projected-4560 deletion completed in 6.15812061s

• [SLOW TEST:16.598 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
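The defaultMode test creates a projected ConfigMap volume with an explicit file mode and a pod whose container reads the mounted file's permissions; "success or failure" in the log refers to that container exiting zero or non-zero. A sketch of the volume portion, with the mode value chosen for illustration:

```yaml
volumes:
- name: projected-configmap-volume
  projected:
    defaultMode: 0400   # illustrative; the test asserts the mounted file's mode matches
    sources:
    - configMap:
        name: projected-configmap-test-volume-b0645647-7a7c-4125-a334-831fc576978e
```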
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:10:46.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 22 15:11:07.082: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:07.129: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:09.130: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:09.147: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:11.130: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:11.135: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:13.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:13.166: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:15.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:15.134: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:17.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:17.137: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:19.130: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:19.370: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:21.130: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:21.138: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:23.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:23.135: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:25.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:25.135: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:27.130: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:27.137: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:29.130: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:29.181: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:31.132: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:31.139: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:33.130: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:33.136: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:35.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:35.135: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 22 15:11:37.129: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 22 15:11:37.138: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:11:37.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8460" for this suite.
Feb 22 15:11:59.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:11:59.324: INFO: namespace container-lifecycle-hook-8460 deletion completed in 22.147806022s

• [SLOW TEST:72.540 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
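The preStop test first starts a handler pod (the "container to handle the HTTPGet hook request" step), then creates a pod whose container carries a preStop exec hook that calls back to that handler, deletes the pod, and verifies the handler saw the call. A rough sketch of the hook wiring; the handler address, image, and command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                 # assumed image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # illustrative: report back to the handler pod created in BeforeEach
          command: ["sh", "-c", "wget -qO- http://HANDLER_POD_IP:8080/echo?msg=prestop-exec-hook"]
```

The ~30 seconds of "still exists" polling above is expected: deletion waits for the preStop hook and the termination grace period before the pod object disappears.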
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:11:59.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 22 15:12:09.697: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:12:09.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-444" for this suite.
Feb 22 15:12:15.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:12:15.917: INFO: namespace container-runtime-444 deletion completed in 6.174283891s

• [SLOW TEST:16.592 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
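With `terminationMessagePolicy: FallbackToLogsOnError`, the kubelet uses the tail of the container's log as the termination message when the container fails and wrote nothing to the termination message file, which is why the test above expects `DONE` once the container reaches Failed. A minimal sketch of such a container spec; the image and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-test       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                     # assumed
    command: ["/bin/sh", "-c", "echo -n DONE; exit 1"]   # fail so the log becomes the message
    terminationMessagePolicy: FallbackToLogsOnError
```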
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:12:15.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 22 15:12:16.062: INFO: Waiting up to 5m0s for pod "var-expansion-e5754ebc-835f-42e4-898d-f176203021ee" in namespace "var-expansion-4946" to be "success or failure"
Feb 22 15:12:16.161: INFO: Pod "var-expansion-e5754ebc-835f-42e4-898d-f176203021ee": Phase="Pending", Reason="", readiness=false. Elapsed: 98.18508ms
Feb 22 15:12:18.191: INFO: Pod "var-expansion-e5754ebc-835f-42e4-898d-f176203021ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12895527s
Feb 22 15:12:20.198: INFO: Pod "var-expansion-e5754ebc-835f-42e4-898d-f176203021ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135512958s
Feb 22 15:12:22.209: INFO: Pod "var-expansion-e5754ebc-835f-42e4-898d-f176203021ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146655507s
Feb 22 15:12:24.521: INFO: Pod "var-expansion-e5754ebc-835f-42e4-898d-f176203021ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.458555557s
Feb 22 15:12:26.535: INFO: Pod "var-expansion-e5754ebc-835f-42e4-898d-f176203021ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.472800962s
STEP: Saw pod success
Feb 22 15:12:26.536: INFO: Pod "var-expansion-e5754ebc-835f-42e4-898d-f176203021ee" satisfied condition "success or failure"
Feb 22 15:12:26.541: INFO: Trying to get logs from node iruya-node pod var-expansion-e5754ebc-835f-42e4-898d-f176203021ee container dapi-container: 
STEP: delete the pod
Feb 22 15:12:26.655: INFO: Waiting for pod var-expansion-e5754ebc-835f-42e4-898d-f176203021ee to disappear
Feb 22 15:12:26.664: INFO: Pod var-expansion-e5754ebc-835f-42e4-898d-f176203021ee no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:12:26.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4946" for this suite.
Feb 22 15:12:32.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:12:32.811: INFO: namespace var-expansion-4946 deletion completed in 6.140224386s

• [SLOW TEST:16.893 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:12:32.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2510
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 22 15:12:32.920: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 22 15:13:13.071: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2510 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 15:13:13.071: INFO: >>> kubeConfig: /root/.kube/config
I0222 15:13:13.134810       8 log.go:172] (0xc0018f6d10) (0xc00156b180) Create stream
I0222 15:13:13.135041       8 log.go:172] (0xc0018f6d10) (0xc00156b180) Stream added, broadcasting: 1
I0222 15:13:13.144004       8 log.go:172] (0xc0018f6d10) Reply frame received for 1
I0222 15:13:13.144122       8 log.go:172] (0xc0018f6d10) (0xc0011d4d20) Create stream
I0222 15:13:13.144140       8 log.go:172] (0xc0018f6d10) (0xc0011d4d20) Stream added, broadcasting: 3
I0222 15:13:13.146309       8 log.go:172] (0xc0018f6d10) Reply frame received for 3
I0222 15:13:13.146344       8 log.go:172] (0xc0018f6d10) (0xc00156b220) Create stream
I0222 15:13:13.146358       8 log.go:172] (0xc0018f6d10) (0xc00156b220) Stream added, broadcasting: 5
I0222 15:13:13.148484       8 log.go:172] (0xc0018f6d10) Reply frame received for 5
I0222 15:13:14.328405       8 log.go:172] (0xc0018f6d10) Data frame received for 3
I0222 15:13:14.328666       8 log.go:172] (0xc0011d4d20) (3) Data frame handling
I0222 15:13:14.328760       8 log.go:172] (0xc0011d4d20) (3) Data frame sent
I0222 15:13:14.592159       8 log.go:172] (0xc0018f6d10) Data frame received for 1
I0222 15:13:14.592285       8 log.go:172] (0xc0018f6d10) (0xc00156b220) Stream removed, broadcasting: 5
I0222 15:13:14.592449       8 log.go:172] (0xc00156b180) (1) Data frame handling
I0222 15:13:14.592514       8 log.go:172] (0xc00156b180) (1) Data frame sent
I0222 15:13:14.592602       8 log.go:172] (0xc0018f6d10) (0xc0011d4d20) Stream removed, broadcasting: 3
I0222 15:13:14.592686       8 log.go:172] (0xc0018f6d10) (0xc00156b180) Stream removed, broadcasting: 1
I0222 15:13:14.592758       8 log.go:172] (0xc0018f6d10) Go away received
I0222 15:13:14.593233       8 log.go:172] (0xc0018f6d10) (0xc00156b180) Stream removed, broadcasting: 1
I0222 15:13:14.593283       8 log.go:172] (0xc0018f6d10) (0xc0011d4d20) Stream removed, broadcasting: 3
I0222 15:13:14.593306       8 log.go:172] (0xc0018f6d10) (0xc00156b220) Stream removed, broadcasting: 5
Feb 22 15:13:14.593: INFO: Found all expected endpoints: [netserver-0]
Feb 22 15:13:14.605: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2510 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 22 15:13:14.605: INFO: >>> kubeConfig: /root/.kube/config
I0222 15:13:14.679309       8 log.go:172] (0xc0018f7a20) (0xc00156b680) Create stream
I0222 15:13:14.679377       8 log.go:172] (0xc0018f7a20) (0xc00156b680) Stream added, broadcasting: 1
I0222 15:13:14.685571       8 log.go:172] (0xc0018f7a20) Reply frame received for 1
I0222 15:13:14.685626       8 log.go:172] (0xc0018f7a20) (0xc003218b40) Create stream
I0222 15:13:14.685649       8 log.go:172] (0xc0018f7a20) (0xc003218b40) Stream added, broadcasting: 3
I0222 15:13:14.687485       8 log.go:172] (0xc0018f7a20) Reply frame received for 3
I0222 15:13:14.687509       8 log.go:172] (0xc0018f7a20) (0xc003218be0) Create stream
I0222 15:13:14.687517       8 log.go:172] (0xc0018f7a20) (0xc003218be0) Stream added, broadcasting: 5
I0222 15:13:14.688778       8 log.go:172] (0xc0018f7a20) Reply frame received for 5
I0222 15:13:15.792758       8 log.go:172] (0xc0018f7a20) Data frame received for 3
I0222 15:13:15.793058       8 log.go:172] (0xc003218b40) (3) Data frame handling
I0222 15:13:15.793091       8 log.go:172] (0xc003218b40) (3) Data frame sent
I0222 15:13:16.000754       8 log.go:172] (0xc0018f7a20) (0xc003218be0) Stream removed, broadcasting: 5
I0222 15:13:16.000930       8 log.go:172] (0xc0018f7a20) Data frame received for 1
I0222 15:13:16.001015       8 log.go:172] (0xc0018f7a20) (0xc003218b40) Stream removed, broadcasting: 3
I0222 15:13:16.001102       8 log.go:172] (0xc00156b680) (1) Data frame handling
I0222 15:13:16.001137       8 log.go:172] (0xc00156b680) (1) Data frame sent
I0222 15:13:16.001155       8 log.go:172] (0xc0018f7a20) (0xc00156b680) Stream removed, broadcasting: 1
I0222 15:13:16.001176       8 log.go:172] (0xc0018f7a20) Go away received
I0222 15:13:16.001807       8 log.go:172] (0xc0018f7a20) (0xc00156b680) Stream removed, broadcasting: 1
I0222 15:13:16.001847       8 log.go:172] (0xc0018f7a20) (0xc003218b40) Stream removed, broadcasting: 3
I0222 15:13:16.001875       8 log.go:172] (0xc0018f7a20) (0xc003218be0) Stream removed, broadcasting: 5
Feb 22 15:13:16.001: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:13:16.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2510" for this suite.
Feb 22 15:13:40.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:13:40.182: INFO: namespace pod-network-test-2510 deletion completed in 24.165570203s

• [SLOW TEST:67.371 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
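(Editor's note, not part of the log: the UDP probe logged above pipes the reply through `grep -v '^\s*$'` so that blank lines in the `nc` output don't count as endpoints. The filtering step can be sketched in isolation; `netserver-1` stands in for the reply seen in this run, and the cluster address `10.44.0.1:8081` is specific to this run.)

```shell
# Sketch of the probe's post-processing: a UDP reply of "netserver-1"
# followed by empty lines is reduced to the endpoint name alone.
# grep -v '^\s*$' drops lines that are empty or whitespace-only (GNU grep).
printf 'netserver-1\n\n\n' | grep -v '^\s*$'
# prints: netserver-1
```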
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:13:40.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 22 15:13:40.302: INFO: Waiting up to 5m0s for pod "pod-7f21c9bc-c7f0-409a-8382-56df8033d781" in namespace "emptydir-2057" to be "success or failure"
Feb 22 15:13:40.331: INFO: Pod "pod-7f21c9bc-c7f0-409a-8382-56df8033d781": Phase="Pending", Reason="", readiness=false. Elapsed: 29.177717ms
Feb 22 15:13:42.340: INFO: Pod "pod-7f21c9bc-c7f0-409a-8382-56df8033d781": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037740516s
Feb 22 15:13:44.347: INFO: Pod "pod-7f21c9bc-c7f0-409a-8382-56df8033d781": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045172997s
Feb 22 15:13:46.458: INFO: Pod "pod-7f21c9bc-c7f0-409a-8382-56df8033d781": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15542887s
Feb 22 15:13:48.473: INFO: Pod "pod-7f21c9bc-c7f0-409a-8382-56df8033d781": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170996906s
Feb 22 15:13:50.486: INFO: Pod "pod-7f21c9bc-c7f0-409a-8382-56df8033d781": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.184313251s
STEP: Saw pod success
Feb 22 15:13:50.487: INFO: Pod "pod-7f21c9bc-c7f0-409a-8382-56df8033d781" satisfied condition "success or failure"
Feb 22 15:13:50.493: INFO: Trying to get logs from node iruya-node pod pod-7f21c9bc-c7f0-409a-8382-56df8033d781 container test-container: 
STEP: delete the pod
Feb 22 15:13:51.751: INFO: Waiting for pod pod-7f21c9bc-c7f0-409a-8382-56df8033d781 to disappear
Feb 22 15:13:51.760: INFO: Pod pod-7f21c9bc-c7f0-409a-8382-56df8033d781 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:13:51.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2057" for this suite.
Feb 22 15:13:57.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:13:58.080: INFO: namespace emptydir-2057 deletion completed in 6.276855479s

• [SLOW TEST:17.898 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 22 15:13:58.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 22 15:13:58.235: INFO: Waiting up to 5m0s for pod "pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4" in namespace "emptydir-6772" to be "success or failure"
Feb 22 15:13:58.248: INFO: Pod "pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.321524ms
Feb 22 15:14:00.264: INFO: Pod "pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028989215s
Feb 22 15:14:02.274: INFO: Pod "pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038625695s
Feb 22 15:14:04.284: INFO: Pod "pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04903257s
Feb 22 15:14:06.327: INFO: Pod "pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4": Phase="Running", Reason="", readiness=true. Elapsed: 8.09181883s
Feb 22 15:14:08.340: INFO: Pod "pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10536693s
STEP: Saw pod success
Feb 22 15:14:08.341: INFO: Pod "pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4" satisfied condition "success or failure"
Feb 22 15:14:08.346: INFO: Trying to get logs from node iruya-node pod pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4 container test-container: 
STEP: delete the pod
Feb 22 15:14:08.874: INFO: Waiting for pod pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4 to disappear
Feb 22 15:14:08.888: INFO: Pod pod-28d6059b-7ae3-4c87-9e39-33afc415b2e4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 22 15:14:08.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6772" for this suite.
Feb 22 15:14:14.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 22 15:14:15.040: INFO: namespace emptydir-6772 deletion completed in 6.147094971s

• [SLOW TEST:16.958 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
Feb 22 15:14:15.041: INFO: Running AfterSuite actions on all nodes
Feb 22 15:14:15.041: INFO: Running AfterSuite actions on node 1
Feb 22 15:14:15.041: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8283.443 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS