I0104 13:56:48.691237 8 e2e.go:243] Starting e2e run "c349832e-9863-4cd2-b91e-ce17260d342c" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578146207 - Will randomize all specs
Will run 215 of 4412 specs

Jan 4 13:56:48.988: INFO: >>> kubeConfig: /root/.kube/config
Jan 4 13:56:48.991: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 4 13:56:49.018: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 4 13:56:49.044: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 4 13:56:49.044: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 4 13:56:49.044: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 4 13:56:49.061: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 4 13:56:49.061: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 4 13:56:49.061: INFO: e2e test version: v1.15.7
Jan 4 13:56:49.064: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 13:56:49.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Jan 4 13:56:49.188: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-9f89ec9b-b5d5-4d7d-958c-fbdad2caeeb2
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 13:57:05.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7685" for this suite.
Jan 4 13:57:27.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 13:57:27.554: INFO: namespace configmap-7685 deletion completed in 22.120536949s
• [SLOW TEST:38.489 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 13:57:27.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 4 13:57:27.730: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 4 13:57:32.780: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 4 13:57:36.793: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 4 13:57:36.856: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2679,SelfLink:/apis/apps/v1/namespaces/deployment-2679/deployments/test-cleanup-deployment,UID:ac2b4d92-5adb-44eb-bc84-a032fd307b29,ResourceVersion:19273316,Generation:1,CreationTimestamp:2020-01-04 13:57:36 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
Jan 4 13:57:36.898: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2679,SelfLink:/apis/apps/v1/namespaces/deployment-2679/replicasets/test-cleanup-deployment-55bbcbc84c,UID:deaddaa7-cc5f-4daf-8cfe-f5b28f81fd99,ResourceVersion:19273319,Generation:1,CreationTimestamp:2020-01-04 13:57:36 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ac2b4d92-5adb-44eb-bc84-a032fd307b29 0xc002ebf7e7 0xc002ebf7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 4 13:57:36.898: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 4 13:57:36.898: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2679,SelfLink:/apis/apps/v1/namespaces/deployment-2679/replicasets/test-cleanup-controller,UID:605cd415-ff00-4193-989c-afe929842274,ResourceVersion:19273318,Generation:1,CreationTimestamp:2020-01-04 13:57:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ac2b4d92-5adb-44eb-bc84-a032fd307b29 0xc002ebf717 0xc002ebf718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 4 13:57:37.006: INFO: Pod "test-cleanup-controller-np5b4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-np5b4,GenerateName:test-cleanup-controller-,Namespace:deployment-2679,SelfLink:/api/v1/namespaces/deployment-2679/pods/test-cleanup-controller-np5b4,UID:8b0c5bf9-c419-4bd4-8c55-c28de335793b,ResourceVersion:19273314,Generation:0,CreationTimestamp:2020-01-04 13:57:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 605cd415-ff00-4193-989c-afe929842274 0xc002bd20c7 0xc002bd20c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-24dn9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-24dn9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-24dn9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bd2140} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bd2160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:57:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:57:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:57:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:57:27 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-04 13:57:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 13:57:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://aa0810cbd61830c1a608323bc8a488e8915d8b017d512d65db8c1b6ae31694cf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 13:57:37.006: INFO: Pod "test-cleanup-deployment-55bbcbc84c-lxs8z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-lxs8z,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2679,SelfLink:/api/v1/namespaces/deployment-2679/pods/test-cleanup-deployment-55bbcbc84c-lxs8z,UID:11398598-b0bb-44ea-a4aa-6da0b84e444e,ResourceVersion:19273325,Generation:0,CreationTimestamp:2020-01-04 13:57:36 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c deaddaa7-cc5f-4daf-8cfe-f5b28f81fd99 0xc002bd2247 0xc002bd2248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-24dn9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-24dn9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-24dn9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002bd22c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002bd22e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 13:57:36 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 13:57:37.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2679" for this suite.
Jan 4 13:57:45.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 13:57:45.220: INFO: namespace deployment-2679 deletion completed in 8.205765018s
• [SLOW TEST:17.666 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 13:57:45.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 4 13:57:45.359: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73" in namespace "projected-4827" to be "success or failure"
Jan 4 13:57:45.408: INFO: Pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 48.675153ms
Jan 4 13:57:47.439: INFO: Pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078906732s
Jan 4 13:57:49.451: INFO: Pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091077098s
Jan 4 13:57:51.464: INFO: Pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104133936s
Jan 4 13:57:53.471: INFO: Pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111089246s
Jan 4 13:57:55.477: INFO: Pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117386401s
Jan 4 13:57:57.489: INFO: Pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73": Phase="Pending", Reason="", readiness=false. Elapsed: 12.129334666s
Jan 4 13:57:59.499: INFO: Pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.13887003s
STEP: Saw pod success
Jan 4 13:57:59.499: INFO: Pod "downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73" satisfied condition "success or failure"
Jan 4 13:57:59.503: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73 container client-container:
STEP: delete the pod
Jan 4 13:57:59.662: INFO: Waiting for pod downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73 to disappear
Jan 4 13:57:59.667: INFO: Pod downwardapi-volume-3a552a21-1463-42e7-94b5-b3d535ed5f73 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 13:57:59.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4827" for this suite.
Jan 4 13:58:05.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 13:58:05.825: INFO: namespace projected-4827 deletion completed in 6.139787678s
• [SLOW TEST:20.604 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 13:58:05.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a0d6e547-94c9-4e40-9e4b-ac9d0b5a4d39
STEP: Creating a pod to test consume secrets
Jan 4 13:58:06.134: INFO: Waiting up to 5m0s for pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea" in namespace "secrets-9088" to be "success or failure"
Jan 4 13:58:06.193: INFO: Pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea": Phase="Pending", Reason="", readiness=false. Elapsed: 58.514178ms
Jan 4 13:58:08.203: INFO: Pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068672724s
Jan 4 13:58:10.217: INFO: Pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082448605s
Jan 4 13:58:12.231: INFO: Pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096952863s
Jan 4 13:58:14.245: INFO: Pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110114384s
Jan 4 13:58:16.250: INFO: Pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea": Phase="Pending", Reason="", readiness=false. Elapsed: 10.115766101s
Jan 4 13:58:18.257: INFO: Pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea": Phase="Pending", Reason="", readiness=false. Elapsed: 12.12223466s
Jan 4 13:58:20.264: INFO: Pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.129546454s
STEP: Saw pod success
Jan 4 13:58:20.264: INFO: Pod "pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea" satisfied condition "success or failure"
Jan 4 13:58:20.267: INFO: Trying to get logs from node iruya-node pod pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea container secret-volume-test:
STEP: delete the pod
Jan 4 13:58:20.343: INFO: Waiting for pod pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea to disappear
Jan 4 13:58:20.360: INFO: Pod pod-secrets-66dbc7bd-f89e-4513-ad0f-1d60f55289ea no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 13:58:20.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9088" for this suite.
Jan 4 13:58:26.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 13:58:26.587: INFO: namespace secrets-9088 deletion completed in 6.218406491s
• [SLOW TEST:20.762 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 13:58:26.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 4 13:58:26.791: INFO: Waiting up to 5m0s for pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c" in namespace "emptydir-5379" to be "success or failure"
Jan 4 13:58:26.838: INFO: Pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 47.481759ms
Jan 4 13:58:28.851: INFO: Pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060003005s
Jan 4 13:58:30.863: INFO: Pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071859003s
Jan 4 13:58:32.873: INFO: Pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082237343s
Jan 4 13:58:34.887: INFO: Pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096541605s
Jan 4 13:58:36.896: INFO: Pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104990871s
Jan 4 13:58:38.906: INFO: Pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.11472152s
Jan 4 13:58:41.990: INFO: Pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.199210247s
STEP: Saw pod success
Jan 4 13:58:41.990: INFO: Pod "pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c" satisfied condition "success or failure"
Jan 4 13:58:41.996: INFO: Trying to get logs from node iruya-node pod pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c container test-container:
STEP: delete the pod
Jan 4 13:58:42.080: INFO: Waiting for pod pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c to disappear
Jan 4 13:58:42.175: INFO: Pod pod-140d1578-24f4-4fd9-9d65-26e8c5d1fa5c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 13:58:42.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5379" for this suite.
Jan 4 13:58:48.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 13:58:48.441: INFO: namespace emptydir-5379 deletion completed in 6.255769046s
• [SLOW TEST:21.853 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 13:58:48.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 13:59:48.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8759" for this suite.
Jan 4 14:00:10.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:00:10.757: INFO: namespace container-probe-8759 deletion completed in 22.139746246s
• [SLOW TEST:82.316 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:00:10.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 4 14:00:10.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:00:21.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3281" for this suite.
Jan 4 14:01:03.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:01:03.579: INFO: namespace pods-3281 deletion completed in 42.140394617s
• [SLOW TEST:52.821 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:01:03.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-64061bb2-8aff-4a79-a6a2-ac8e37cef1f6
STEP: Creating a pod to test consume secrets
Jan 4 14:01:03.812: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7" in namespace "projected-6790" to be "success or failure"
Jan 4 14:01:03.820: INFO: Pod "pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.840762ms
Jan 4 14:01:05.831: INFO: Pod "pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018270355s
Jan 4 14:01:07.842: INFO: Pod "pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028924906s
Jan 4 14:01:09.867: INFO: Pod "pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054757678s
Jan 4 14:01:11.905: INFO: Pod "pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092069729s
STEP: Saw pod success
Jan 4 14:01:11.905: INFO: Pod "pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7" satisfied condition "success or failure"
Jan 4 14:01:11.909: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7 container projected-secret-volume-test:
STEP: delete the pod
Jan 4 14:01:12.012: INFO: Waiting for pod pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7 to disappear
Jan 4 14:01:12.028: INFO: Pod pod-projected-secrets-0454f7e7-ece5-449a-a8e4-ccca843a97c7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:01:12.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6790" for this suite.
Jan 4 14:01:18.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:01:18.306: INFO: namespace projected-6790 deletion completed in 6.269113961s
• [SLOW TEST:14.727 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:01:18.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 4 14:01:18.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57" in namespace "downward-api-7152" to be "success or failure"
Jan 4 14:01:18.431: INFO: Pod "downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.685268ms
Jan 4 14:01:20.449: INFO: Pod "downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024670085s
Jan 4 14:01:23.694: INFO: Pod "downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57": Phase="Pending", Reason="", readiness=false. Elapsed: 5.26914536s
Jan 4 14:01:25.704: INFO: Pod "downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57": Phase="Pending", Reason="", readiness=false. Elapsed: 7.279271995s
Jan 4 14:01:27.715: INFO: Pod "downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57": Phase="Pending", Reason="", readiness=false. Elapsed: 9.290414526s
Jan 4 14:01:29.761: INFO: Pod "downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57": Phase="Pending", Reason="", readiness=false. Elapsed: 11.335731684s
Jan 4 14:01:31.774: INFO: Pod "downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.349039323s
STEP: Saw pod success
Jan 4 14:01:31.774: INFO: Pod "downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57" satisfied condition "success or failure"
Jan 4 14:01:31.791: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57 container client-container:
STEP: delete the pod
Jan 4 14:01:31.944: INFO: Waiting for pod downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57 to disappear
Jan 4 14:01:31.988: INFO: Pod downwardapi-volume-2b811161-6242-4bda-8219-a9872a034c57 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:01:31.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7152" for this suite.
Jan 4 14:01:38.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:01:38.173: INFO: namespace downward-api-7152 deletion completed in 6.165491491s
• [SLOW TEST:19.866 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:01:38.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 4 14:01:38.221: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 4 14:01:38.233: INFO: Waiting for terminating namespaces to be deleted...
Jan 4 14:01:38.236: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 4 14:01:38.244: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 4 14:01:38.244: INFO: Container kube-proxy ready: true, restart count 0
Jan 4 14:01:38.244: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 4 14:01:38.244: INFO: Container weave ready: true, restart count 0
Jan 4 14:01:38.244: INFO: Container weave-npc ready: true, restart count 0
Jan 4 14:01:38.244: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 4 14:01:38.251: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 4 14:01:38.251: INFO: Container kube-scheduler ready: true, restart count 12
Jan 4 14:01:38.251: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 4 14:01:38.251: INFO: Container coredns ready: true, restart count 0
Jan 4 14:01:38.251: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 4 14:01:38.251: INFO: Container etcd ready: true, restart count 0
Jan 4 14:01:38.251: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 4 14:01:38.251: INFO: Container weave ready: true, restart count 0
Jan 4 14:01:38.251: INFO: Container weave-npc ready: true, restart count 0
Jan 4 14:01:38.251: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 4 14:01:38.251: INFO: Container coredns ready: true, restart count 0
Jan 4 14:01:38.251: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 4 14:01:38.251: INFO: Container kube-controller-manager ready: true, restart count 17
Jan 4 14:01:38.251: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 4 14:01:38.251: INFO: Container kube-proxy ready: true, restart count 0
Jan 4 14:01:38.251: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 4 14:01:38.251: INFO: Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b64ba215-aaa1-402c-8c1b-aa2df049218b 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-b64ba215-aaa1-402c-8c1b-aa2df049218b off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b64ba215-aaa1-402c-8c1b-aa2df049218b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:02:00.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4931" for this suite.
Jan 4 14:02:14.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:02:14.904: INFO: namespace sched-pred-4931 deletion completed in 14.12726726s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:36.731 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:02:14.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-4f1338a3-593c-4b18-ab53-41f51f63cef4
STEP: Creating configMap with name cm-test-opt-upd-fc6d1211-f17d-441f-8320-cafedf6290a9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-4f1338a3-593c-4b18-ab53-41f51f63cef4
STEP: Updating configmap cm-test-opt-upd-fc6d1211-f17d-441f-8320-cafedf6290a9
STEP: Creating configMap with name cm-test-opt-create-03528ee0-b9e2-4c60-aaeb-88c607bffb16
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:03:49.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9798" for this suite.
Jan 4 14:04:13.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:04:13.530: INFO: namespace configmap-9798 deletion completed in 24.147164175s
• [SLOW TEST:118.626 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:04:13.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-cmv5
STEP: Creating a pod to test atomic-volume-subpath
Jan 4 14:04:13.812: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cmv5" in namespace "subpath-2383" to be "success or failure"
Jan 4 14:04:13.831: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.010148ms
Jan 4 14:04:15.845: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032718814s
Jan 4 14:04:17.861: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049222026s
Jan 4 14:04:19.881: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069007229s
Jan 4 14:04:21.895: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083215357s
Jan 4 14:04:23.908: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.095961666s
Jan 4 14:04:25.930: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.117841132s
Jan 4 14:04:27.940: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 14.12827022s
Jan 4 14:04:29.960: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 16.147730406s
Jan 4 14:04:31.970: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 18.157719663s
Jan 4 14:04:33.975: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 20.163152852s
Jan 4 14:04:35.987: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 22.174666469s
Jan 4 14:04:37.994: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 24.182333076s
Jan 4 14:04:40.005: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 26.192863015s
Jan 4 14:04:42.015: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 28.203206913s
Jan 4 14:04:44.028: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 30.216191323s
Jan 4 14:04:46.041: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Running", Reason="", readiness=true. Elapsed: 32.229023041s
Jan 4 14:04:48.055: INFO: Pod "pod-subpath-test-downwardapi-cmv5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.242604868s
STEP: Saw pod success
Jan 4 14:04:48.055: INFO: Pod "pod-subpath-test-downwardapi-cmv5" satisfied condition "success or failure"
Jan 4 14:04:48.061: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-cmv5 container test-container-subpath-downwardapi-cmv5:
STEP: delete the pod
Jan 4 14:04:48.233: INFO: Waiting for pod pod-subpath-test-downwardapi-cmv5 to disappear
Jan 4 14:04:48.241: INFO: Pod pod-subpath-test-downwardapi-cmv5 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-cmv5
Jan 4 14:04:48.241: INFO: Deleting pod "pod-subpath-test-downwardapi-cmv5" in namespace "subpath-2383"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:04:48.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2383" for this suite.
Jan 4 14:04:54.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:04:54.513: INFO: namespace subpath-2383 deletion completed in 6.263213934s
• [SLOW TEST:40.983 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:04:54.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-4b2187f2-d7a1-4b52-a814-d86bf7c801c4
STEP: Creating a pod to test consume secrets
Jan 4 14:04:54.801: INFO: Waiting up to 5m0s for pod "pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2" in namespace "secrets-2260" to be "success or failure"
Jan 4 14:04:54.809: INFO: Pod "pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.639929ms
Jan 4 14:04:56.825: INFO: Pod "pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024181367s
Jan 4 14:04:58.885: INFO: Pod "pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084106878s
Jan 4 14:05:00.902: INFO: Pod "pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101024804s
Jan 4 14:05:02.949: INFO: Pod "pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148184548s
Jan 4 14:05:05.009: INFO: Pod "pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207831913s
Jan 4 14:05:07.028: INFO: Pod "pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.227300922s
STEP: Saw pod success
Jan 4 14:05:07.028: INFO: Pod "pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2" satisfied condition "success or failure"
Jan 4 14:05:07.037: INFO: Trying to get logs from node iruya-node pod pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2 container secret-volume-test:
STEP: delete the pod
Jan 4 14:05:07.212: INFO: Waiting for pod pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2 to disappear
Jan 4 14:05:07.218: INFO: Pod pod-secrets-8e8f6de1-9912-485f-a02f-9890096c0dd2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:05:07.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2260" for this suite.
Jan 4 14:05:13.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:05:13.386: INFO: namespace secrets-2260 deletion completed in 6.162668924s
• [SLOW TEST:18.872 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:05:13.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 4 14:05:25.066: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:05:25.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1658" for this suite.
Jan 4 14:05:31.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:05:31.430: INFO: namespace container-runtime-1658 deletion completed in 6.32617003s
• [SLOW TEST:18.043 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:05:31.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 4 14:05:42.867: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:05:42.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9211" for this suite.
Jan 4 14:05:49.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:05:49.152: INFO: namespace container-runtime-9211 deletion completed in 6.141388335s
• [SLOW TEST:17.722 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:05:49.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jan 4 14:05:49.240: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix235962759/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 14:05:49.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-180" for this suite. Jan 4 14:05:55.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 14:05:55.517: INFO: namespace kubectl-180 deletion completed in 6.166618172s • [SLOW TEST:6.364 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:05:55.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 4 14:05:55.610: INFO: Creating ReplicaSet my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc
Jan 4 14:05:55.621: INFO: Pod name my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc: Found 0 pods out of 1
Jan 4 14:06:00.632: INFO: Pod name my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc: Found 1 pods out of 1
Jan 4 14:06:00.632: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc" is running
Jan 4 14:06:04.646: INFO: Pod "my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc-trrtt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:05:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:05:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:05:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 14:05:55 +0000 UTC Reason: Message:}])
Jan 4 14:06:04.646: INFO: Trying to dial the pod
Jan 4 14:06:09.753: INFO: Controller my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc: Got expected result from replica 1 [my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc-trrtt]: "my-hostname-basic-b2463795-a81e-4e47-872c-d384ffced6fc-trrtt", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:06:09.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3858" for this suite.
Jan 4 14:06:15.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:06:16.011: INFO: namespace replicaset-3858 deletion completed in 6.247711601s
• [SLOW TEST:20.494 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:06:16.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-fa818f9b-7929-49ee-9e10-3329ad4f34d5
STEP: Creating a pod to test consume configMaps
Jan 4 14:06:16.168: INFO: Waiting up to 5m0s for pod "pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca" in namespace "configmap-6014" to be "success or failure"
Jan 4 14:06:16.190: INFO: Pod "pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca": Phase="Pending", Reason="", readiness=false. Elapsed: 22.013385ms
Jan 4 14:06:18.199: INFO: Pod "pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03058075s
Jan 4 14:06:20.206: INFO: Pod "pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038443094s
Jan 4 14:06:22.212: INFO: Pod "pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044254461s
Jan 4 14:06:24.221: INFO: Pod "pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052904772s
Jan 4 14:06:26.229: INFO: Pod "pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.061409617s
Jan 4 14:06:28.238: INFO: Pod "pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.069726685s
STEP: Saw pod success
Jan 4 14:06:28.238: INFO: Pod "pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca" satisfied condition "success or failure"
Jan 4 14:06:28.245: INFO: Trying to get logs from node iruya-node pod pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca container configmap-volume-test:
STEP: delete the pod
Jan 4 14:06:28.471: INFO: Waiting for pod pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca to disappear
Jan 4 14:06:28.486: INFO: Pod pod-configmaps-91441748-c404-4f6d-9aed-a72f0f4b42ca no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:06:28.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6014" for this suite.
Jan 4 14:06:34.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:06:34.661: INFO: namespace configmap-6014 deletion completed in 6.166964995s
• [SLOW TEST:18.649 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:06:34.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5624, will wait for the garbage collector to delete the pods
Jan 4 14:06:46.884: INFO: Deleting Job.batch foo took: 18.143549ms
Jan 4 14:06:47.184: INFO: Terminating Job.batch foo pods took: 300.578762ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:07:36.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5624" for this suite.
Jan 4 14:07:42.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:07:42.944: INFO: namespace job-5624 deletion completed in 6.131719194s
• [SLOW TEST:68.283 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:07:42.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-af939b41-616f-4d9e-a8b5-0a42eac30a5e in namespace container-probe-8488
Jan 4 14:07:57.115: INFO: Started pod test-webserver-af939b41-616f-4d9e-a8b5-0a42eac30a5e in namespace container-probe-8488
STEP: checking the pod's current state and verifying that restartCount is present
Jan 4 14:07:57.119: INFO: Initial restart count of pod test-webserver-af939b41-616f-4d9e-a8b5-0a42eac30a5e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:11:58.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8488" for this suite.
Jan 4 14:12:04.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:12:05.051: INFO: namespace container-probe-8488 deletion completed in 6.126810416s
• [SLOW TEST:262.107 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:12:05.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 4 14:12:05.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6286 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 4 14:12:20.557: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0104 14:12:18.919621 50 log.go:172] (0xc00002e2c0) (0xc000a56140) Create stream\nI0104 14:12:18.919777 50 log.go:172] (0xc00002e2c0) (0xc000a56140) Stream added, broadcasting: 1\nI0104 14:12:18.938061 50 log.go:172] (0xc00002e2c0) Reply frame received for 1\nI0104 14:12:18.938110 50 log.go:172] (0xc00002e2c0) (0xc0003d1e00) Create stream\nI0104 14:12:18.938123 50 log.go:172] (0xc00002e2c0) (0xc0003d1e00) Stream added, broadcasting: 3\nI0104 14:12:18.967846 50 log.go:172] (0xc00002e2c0) Reply frame received for 3\nI0104 14:12:18.967929 50 log.go:172] (0xc00002e2c0) (0xc0001d8640) Create stream\nI0104 14:12:18.967950 50 log.go:172] (0xc00002e2c0) (0xc0001d8640) Stream added, broadcasting: 5\nI0104 14:12:18.976665 50 log.go:172] (0xc00002e2c0) Reply frame received for 5\nI0104 14:12:18.976698 50 log.go:172] (0xc00002e2c0) (0xc0007c65a0) Create stream\nI0104 14:12:18.976704 50 log.go:172] (0xc00002e2c0) (0xc0007c65a0) Stream added, broadcasting: 7\nI0104 14:12:18.979763 50 log.go:172] (0xc00002e2c0) Reply frame received for 7\nI0104 14:12:18.979898 50 log.go:172] (0xc0003d1e00) (3) Writing data frame\nI0104 14:12:18.980059 50 log.go:172] (0xc0003d1e00) (3) Writing data frame\nI0104 14:12:19.007838 50 log.go:172] (0xc00002e2c0) Data frame received for 5\nI0104 14:12:19.007876 50 log.go:172] (0xc0001d8640) (5) Data frame handling\nI0104 14:12:19.007889 50 log.go:172] (0xc0001d8640) (5) Data frame sent\nI0104 14:12:19.013870 50 log.go:172] (0xc00002e2c0) Data frame received for 5\nI0104 14:12:19.013880 50 log.go:172] (0xc0001d8640) (5) Data frame handling\nI0104 14:12:19.013885 50 log.go:172] (0xc0001d8640) (5) Data frame sent\nI0104 14:12:20.512349 50 log.go:172] (0xc00002e2c0) Data frame received for 1\nI0104 14:12:20.512430 50 log.go:172] (0xc00002e2c0) (0xc0007c65a0) Stream removed, broadcasting: 7\nI0104 14:12:20.512467 50 log.go:172] (0xc000a56140) (1) Data frame handling\nI0104 14:12:20.512494 50 log.go:172] (0xc00002e2c0) (0xc0003d1e00) Stream removed, broadcasting: 3\nI0104 14:12:20.512540 50 log.go:172] (0xc00002e2c0) (0xc0001d8640) Stream removed, broadcasting: 5\nI0104 14:12:20.512579 50 log.go:172] (0xc000a56140) (1) Data frame sent\nI0104 14:12:20.512597 50 log.go:172] (0xc00002e2c0) (0xc000a56140) Stream removed, broadcasting: 1\nI0104 14:12:20.512624 50 log.go:172] (0xc00002e2c0) Go away received\nI0104 14:12:20.512721 50 log.go:172] (0xc00002e2c0) (0xc000a56140) Stream removed, broadcasting: 1\nI0104 14:12:20.512744 50 log.go:172] (0xc00002e2c0) (0xc0003d1e00) Stream removed, broadcasting: 3\nI0104 14:12:20.512760 50 log.go:172] (0xc00002e2c0) (0xc0001d8640) Stream removed, broadcasting: 5\nI0104 14:12:20.512778 50 log.go:172] (0xc00002e2c0) (0xc0007c65a0) Stream removed, broadcasting: 7\n"
Jan 4 14:12:20.557: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:12:22.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6286" for this suite.
Jan 4 14:12:28.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:12:28.691: INFO: namespace kubectl-6286 deletion completed in 6.118465943s
• [SLOW TEST:23.639 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:12:28.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-458
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 4 14:12:28.732: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 4 14:13:13.200: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-458 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:13:13.200: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:13:13.283944 8 log.go:172] (0xc0012ae2c0) (0xc00104e780) Create stream
I0104 14:13:13.284040 8 log.go:172] (0xc0012ae2c0) (0xc00104e780) Stream added, broadcasting: 1
I0104 14:13:13.291920 8 log.go:172] (0xc0012ae2c0) Reply frame received for 1
I0104 14:13:13.291965 8 log.go:172] (0xc0012ae2c0) (0xc0011e0c80) Create stream
I0104 14:13:13.291978 8 log.go:172] (0xc0012ae2c0) (0xc0011e0c80) Stream added, broadcasting: 3
I0104 14:13:13.295639 8 log.go:172] (0xc0012ae2c0) Reply frame received for 3
I0104 14:13:13.295672 8 log.go:172] (0xc0012ae2c0) (0xc0003881e0) Create stream
I0104 14:13:13.295683 8 log.go:172] (0xc0012ae2c0) (0xc0003881e0) Stream added, broadcasting: 5
I0104 14:13:13.297807 8 log.go:172] (0xc0012ae2c0) Reply frame received for 5
I0104 14:13:13.494432 8 log.go:172] (0xc0012ae2c0) Data frame received for 3
I0104 14:13:13.494586 8 log.go:172] (0xc0011e0c80) (3) Data frame handling
I0104 14:13:13.494655 8 log.go:172] (0xc0011e0c80) (3) Data frame sent
I0104 14:13:13.705310 8 log.go:172] (0xc0012ae2c0) Data frame received for 1
I0104 14:13:13.705548 8 log.go:172] (0xc0012ae2c0) (0xc0011e0c80) Stream removed, broadcasting: 3
I0104 14:13:13.705647 8 log.go:172] (0xc00104e780) (1) Data frame handling
I0104 14:13:13.705688 8 log.go:172] (0xc00104e780) (1) Data frame sent
I0104 14:13:13.705862 8 log.go:172] (0xc0012ae2c0) (0xc00104e780) Stream removed, broadcasting: 1
I0104 14:13:13.705929 8 log.go:172] (0xc0012ae2c0) (0xc0003881e0) Stream removed, broadcasting: 5
I0104 14:13:13.705949 8 log.go:172] (0xc0012ae2c0) Go away received
I0104 14:13:13.706656 8 log.go:172] (0xc0012ae2c0) (0xc00104e780) Stream removed, broadcasting: 1
I0104 14:13:13.706694 8 log.go:172] (0xc0012ae2c0) (0xc0011e0c80) Stream removed, broadcasting: 3
I0104 14:13:13.706706 8 log.go:172] (0xc0012ae2c0) (0xc0003881e0) Stream removed, broadcasting: 5
Jan 4 14:13:13.706: INFO: Found all expected endpoints: [netserver-0]
Jan 4 14:13:13.718: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-458 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:13:13.718: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:13:13.797667 8 log.go:172] (0xc00120c790) (0xc000388320) Create stream
I0104 14:13:13.798080 8 log.go:172] (0xc00120c790) (0xc000388320) Stream added, broadcasting: 1
I0104 14:13:13.815160 8 log.go:172] (0xc00120c790) Reply frame received for 1
I0104 14:13:13.815304 8 log.go:172] (0xc00120c790) (0xc00104ea00) Create stream
I0104 14:13:13.815329 8 log.go:172] (0xc00120c790) (0xc00104ea00) Stream added, broadcasting: 3
I0104 14:13:13.825107 8 log.go:172] (0xc00120c790) Reply frame received for 3
I0104 14:13:13.825479 8 log.go:172] (0xc00120c790) (0xc001f839a0) Create stream
I0104 14:13:13.825583 8 log.go:172] (0xc00120c790) (0xc001f839a0) Stream added, broadcasting: 5
I0104 14:13:13.828342 8 log.go:172] (0xc00120c790) Reply frame received for 5
I0104 14:13:14.090952 8 log.go:172] (0xc00120c790) Data frame received for 3
I0104 14:13:14.091034 8 log.go:172] (0xc00104ea00) (3) Data frame handling
I0104 14:13:14.091052 8 log.go:172] (0xc00104ea00) (3) Data frame sent
I0104 14:13:14.215138 8 log.go:172] (0xc00120c790) Data frame received for 1
I0104 14:13:14.215222 8 log.go:172] (0xc000388320) (1) Data frame handling
I0104 14:13:14.215257 8 log.go:172] (0xc000388320) (1) Data frame sent
I0104 14:13:14.215543 8 log.go:172] (0xc00120c790) (0xc000388320) Stream removed, broadcasting: 1
I0104 14:13:14.215750 8 log.go:172] (0xc00120c790) (0xc00104ea00) Stream removed, broadcasting: 3
I0104 14:13:14.216125 8 log.go:172] (0xc00120c790) (0xc001f839a0) Stream removed, broadcasting: 5
I0104 14:13:14.216186 8 log.go:172] (0xc00120c790) (0xc000388320) Stream removed, broadcasting: 1
I0104 14:13:14.216223 8 log.go:172] (0xc00120c790) (0xc00104ea00) Stream removed, broadcasting: 3
I0104 14:13:14.216247 8 log.go:172] (0xc00120c790) (0xc001f839a0) Stream removed, broadcasting: 5
I0104 14:13:14.216281 8 log.go:172] (0xc00120c790) Go away received
Jan 4 14:13:14.216: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:13:14.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-458" for this suite.
Jan 4 14:13:38.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:13:38.370: INFO: namespace pod-network-test-458 deletion completed in 24.145452021s
• [SLOW TEST:69.678 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 14:13:38.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-zhsg STEP: Creating a pod to test atomic-volume-subpath Jan 4 14:13:38.662: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zhsg" in namespace "subpath-4405" to be "success or failure" Jan 4 14:13:38.741: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Pending", Reason="", readiness=false. Elapsed: 78.243728ms Jan 4 14:13:40.756: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093554078s Jan 4 14:13:42.764: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10198486s Jan 4 14:13:44.772: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109487936s Jan 4 14:13:46.786: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123516894s Jan 4 14:13:48.833: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.170713779s Jan 4 14:13:50.858: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 12.195408565s Jan 4 14:13:52.872: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.209887203s Jan 4 14:13:54.891: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 16.228761682s Jan 4 14:13:56.903: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 18.240349588s Jan 4 14:13:58.921: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 20.258437949s Jan 4 14:14:00.931: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 22.269122566s Jan 4 14:14:02.941: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 24.278714083s Jan 4 14:14:04.958: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 26.295212594s Jan 4 14:14:06.966: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 28.304136671s Jan 4 14:14:08.972: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 30.30994179s Jan 4 14:14:10.980: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Running", Reason="", readiness=true. Elapsed: 32.317692381s Jan 4 14:14:12.987: INFO: Pod "pod-subpath-test-configmap-zhsg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 34.324931605s STEP: Saw pod success Jan 4 14:14:12.987: INFO: Pod "pod-subpath-test-configmap-zhsg" satisfied condition "success or failure" Jan 4 14:14:12.990: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-zhsg container test-container-subpath-configmap-zhsg: STEP: delete the pod Jan 4 14:14:13.063: INFO: Waiting for pod pod-subpath-test-configmap-zhsg to disappear Jan 4 14:14:13.074: INFO: Pod pod-subpath-test-configmap-zhsg no longer exists STEP: Deleting pod pod-subpath-test-configmap-zhsg Jan 4 14:14:13.075: INFO: Deleting pod "pod-subpath-test-configmap-zhsg" in namespace "subpath-4405" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 14:14:13.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4405" for this suite. Jan 4 14:14:19.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 14:14:19.242: INFO: namespace subpath-4405 deletion completed in 6.155044425s • [SLOW TEST:40.872 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating 
a kubernetes client Jan 4 14:14:19.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-e149aebf-b0dd-4430-919a-10c27de5fe41 STEP: Creating a pod to test consume configMaps Jan 4 14:14:19.457: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93" in namespace "projected-5657" to be "success or failure" Jan 4 14:14:19.481: INFO: Pod "pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93": Phase="Pending", Reason="", readiness=false. Elapsed: 23.555068ms Jan 4 14:14:21.491: INFO: Pod "pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032884582s Jan 4 14:14:23.509: INFO: Pod "pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051025913s Jan 4 14:14:25.520: INFO: Pod "pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062403791s Jan 4 14:14:27.533: INFO: Pod "pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075678078s Jan 4 14:14:29.543: INFO: Pod "pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93": Phase="Pending", Reason="", readiness=false. Elapsed: 10.084977134s Jan 4 14:14:31.817: INFO: Pod "pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.359831542s STEP: Saw pod success Jan 4 14:14:31.818: INFO: Pod "pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93" satisfied condition "success or failure" Jan 4 14:14:32.038: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93 container projected-configmap-volume-test: STEP: delete the pod Jan 4 14:14:32.107: INFO: Waiting for pod pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93 to disappear Jan 4 14:14:32.313: INFO: Pod pod-projected-configmaps-e3194ec2-72d6-4608-ae4f-e2fcee3a2e93 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 14:14:32.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5657" for this suite. Jan 4 14:14:38.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 14:14:38.487: INFO: namespace projected-5657 deletion completed in 6.148603252s • [SLOW TEST:19.245 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 14:14:38.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jan 4 14:14:38.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2686' Jan 4 14:14:39.016: INFO: stderr: "" Jan 4 14:14:39.016: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Jan 4 14:14:40.028: INFO: Selector matched 1 pods for map[app:redis] Jan 4 14:14:40.028: INFO: Found 0 / 1 Jan 4 14:14:41.026: INFO: Selector matched 1 pods for map[app:redis] Jan 4 14:14:41.026: INFO: Found 0 / 1 Jan 4 14:14:42.026: INFO: Selector matched 1 pods for map[app:redis] Jan 4 14:14:42.027: INFO: Found 0 / 1 Jan 4 14:14:43.025: INFO: Selector matched 1 pods for map[app:redis] Jan 4 14:14:43.025: INFO: Found 0 / 1 Jan 4 14:14:44.033: INFO: Selector matched 1 pods for map[app:redis] Jan 4 14:14:44.033: INFO: Found 0 / 1 Jan 4 14:14:45.023: INFO: Selector matched 1 pods for map[app:redis] Jan 4 14:14:45.023: INFO: Found 0 / 1 Jan 4 14:14:46.025: INFO: Selector matched 1 pods for map[app:redis] Jan 4 14:14:46.025: INFO: Found 0 / 1 Jan 4 14:14:47.025: INFO: Selector matched 1 pods for map[app:redis] Jan 4 14:14:47.025: INFO: Found 1 / 1 Jan 4 14:14:47.025: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 4 14:14:47.031: INFO: Selector matched 1 pods for map[app:redis] Jan 4 14:14:47.031: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching string Jan 4 14:14:47.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76plt redis-master --namespace=kubectl-2686' Jan 4 14:14:47.249: INFO: stderr: "" Jan 4 14:14:47.250: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Jan 14:14:45.109 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jan 14:14:45.109 # Server started, Redis version 3.2.12\n1:M 04 Jan 14:14:45.109 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 04 Jan 14:14:45.109 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 4 14:14:47.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76plt redis-master --namespace=kubectl-2686 --tail=1' Jan 4 14:14:47.378: INFO: stderr: "" Jan 4 14:14:47.378: INFO: stdout: "1:M 04 Jan 14:14:45.109 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 4 14:14:47.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76plt redis-master --namespace=kubectl-2686 --limit-bytes=1' Jan 4 14:14:47.499: INFO: stderr: "" Jan 4 14:14:47.500: INFO: stdout: " " STEP: exposing timestamps Jan 4 14:14:47.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76plt redis-master --namespace=kubectl-2686 --tail=1 --timestamps' Jan 4 14:14:47.640: INFO: stderr: "" Jan 4 14:14:47.640: INFO: stdout: "2020-01-04T14:14:45.110281503Z 1:M 04 Jan 14:14:45.109 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 4 14:14:50.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76plt redis-master --namespace=kubectl-2686 --since=1s' Jan 4 14:14:50.289: INFO: stderr: "" Jan 4 14:14:50.289: INFO: stdout: "" Jan 4 14:14:50.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76plt redis-master --namespace=kubectl-2686 --since=24h' Jan 4 14:14:50.468: INFO: stderr: "" Jan 4 14:14:50.469: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 04 Jan 14:14:45.109 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jan 14:14:45.109 # Server started, Redis version 3.2.12\n1:M 04 Jan 14:14:45.109 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jan 14:14:45.109 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jan 4 14:14:50.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2686' Jan 4 14:14:50.596: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 4 14:14:50.596: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 4 14:14:50.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2686' Jan 4 14:14:50.712: INFO: stderr: "No resources found.\n" Jan 4 14:14:50.712: INFO: stdout: "" Jan 4 14:14:50.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2686 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 4 14:14:50.812: INFO: stderr: "" Jan 4 14:14:50.812: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 14:14:50.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2686" for this suite. 
Jan 4 14:15:14.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 14:15:15.061: INFO: namespace kubectl-2686 deletion completed in 24.235398482s • [SLOW TEST:36.574 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 14:15:15.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 4 14:15:15.303: INFO: Creating deployment "nginx-deployment" Jan 4 14:15:15.309: INFO: Waiting for observed generation 1 Jan 4 14:15:19.357: INFO: Waiting for all required pods to come up Jan 4 14:15:19.375: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 4 14:15:55.617: INFO: Waiting for deployment "nginx-deployment" to complete Jan 4 14:15:55.622: INFO: Updating deployment "nginx-deployment" with a 
non-existent image Jan 4 14:15:55.629: INFO: Updating deployment nginx-deployment Jan 4 14:15:55.629: INFO: Waiting for observed generation 2 Jan 4 14:15:59.114: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 4 14:16:00.132: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 4 14:16:02.663: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 4 14:16:03.623: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 4 14:16:03.623: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 4 14:16:03.845: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 4 14:16:04.077: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jan 4 14:16:04.078: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jan 4 14:16:04.246: INFO: Updating deployment nginx-deployment Jan 4 14:16:04.247: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jan 4 14:16:07.537: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 4 14:16:07.980: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 4 14:16:15.344: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5816,SelfLink:/apis/apps/v1/namespaces/deployment-5816/deployments/nginx-deployment,UID:f5d8157b-827a-4c85-a261-fda56ead26e4,ResourceVersion:19275738,Generation:3,CreationTimestamp:2020-01-04 14:15:15 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-04 14:16:02 +0000 UTC 2020-01-04 14:15:15 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-04 14:16:06 +0000 UTC 2020-01-04 14:16:06 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 4 14:16:16.871: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5816,SelfLink:/apis/apps/v1/namespaces/deployment-5816/replicasets/nginx-deployment-55fb7cb77f,UID:c07abc50-a069-4c4c-a7d3-bcb61a56e699,ResourceVersion:19275747,Generation:3,CreationTimestamp:2020-01-04 14:15:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f5d8157b-827a-4c85-a261-fda56ead26e4 0xc001ca4b77 0xc001ca4b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 4 14:16:16.872: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 4 14:16:16.872: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5816,SelfLink:/apis/apps/v1/namespaces/deployment-5816/replicasets/nginx-deployment-7b8c6f4498,UID:636246a6-e83a-4f70-b422-67cb607f2f17,ResourceVersion:19275728,Generation:3,CreationTimestamp:2020-01-04 14:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f5d8157b-827a-4c85-a261-fda56ead26e4 0xc001ca4c67 0xc001ca4c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 4 14:16:20.035: INFO: Pod "nginx-deployment-55fb7cb77f-49gl2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-49gl2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-49gl2,UID:c538542b-684c-4bb2-bf72-f8f511415541,ResourceVersion:19275750,Generation:0,CreationTimestamp:2020-01-04 14:16:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000930087 0xc000930088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000930100} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009301a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 14:16:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.035: INFO: Pod "nginx-deployment-55fb7cb77f-696gv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-696gv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-696gv,UID:d0b58441-f9af-4fcd-89a1-85844beef8f8,ResourceVersion:19275635,Generation:0,CreationTimestamp:2020-01-04 14:15:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000930437 0xc000930438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009304a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009304c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 14:15:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.035: INFO: Pod "nginx-deployment-55fb7cb77f-7zxx9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7zxx9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-7zxx9,UID:9be33930-cfd5-4bfc-afe9-00da4c46f971,ResourceVersion:19275717,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc0009305a7 0xc0009305a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc000930620} {node.kubernetes.io/unreachable Exists NoExecute 0xc000930640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.035: INFO: Pod "nginx-deployment-55fb7cb77f-dtpng" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dtpng,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-dtpng,UID:d006af8d-d092-4931-b1c1-40e452dc7386,ResourceVersion:19275715,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc0009306c7 0xc0009306c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000930750} {node.kubernetes.io/unreachable Exists NoExecute 0xc000930770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.036: INFO: Pod "nginx-deployment-55fb7cb77f-f8s6t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f8s6t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-f8s6t,UID:898e9635-5c1c-497d-b3fc-31b968655cce,ResourceVersion:19275751,Generation:0,CreationTimestamp:2020-01-04 14:15:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc0009307f7 0xc0009307f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000930870} {node.kubernetes.io/unreachable Exists NoExecute 0xc000930890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.7,StartTime:2020-01-04 14:15:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:404 not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.036: INFO: Pod "nginx-deployment-55fb7cb77f-fhdt6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fhdt6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-fhdt6,UID:0dfb7526-8d42-45be-b350-8cf52050e178,ResourceVersion:19275723,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000930987 0xc000930988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000930a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc000930a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.036: INFO: Pod "nginx-deployment-55fb7cb77f-fzxbf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fzxbf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-fzxbf,UID:55f22f2d-cce3-49e6-83cc-a929fa4c4cd8,ResourceVersion:19275729,Generation:0,CreationTimestamp:2020-01-04 14:16:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000930aa7 0xc000930aa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000930b10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000930b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:09 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.036: INFO: Pod "nginx-deployment-55fb7cb77f-kzp7b" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kzp7b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-kzp7b,UID:dd8156fc-6b16-43dd-82c5-07d8b15d17d2,ResourceVersion:19275665,Generation:0,CreationTimestamp:2020-01-04 14:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000930bb7 0xc000930bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000930c30} {node.kubernetes.io/unreachable Exists NoExecute 0xc000930c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:59 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-04 14:16:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.036: INFO: Pod "nginx-deployment-55fb7cb77f-lcntv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lcntv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-lcntv,UID:26082a52-8e8e-4543-a745-63c37f6d50f4,ResourceVersion:19275661,Generation:0,CreationTimestamp:2020-01-04 14:15:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000930d47 0xc000930d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000930db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000930dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:59 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 14:16:00 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.036: INFO: Pod "nginx-deployment-55fb7cb77f-rf2c2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rf2c2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-rf2c2,UID:c8d7f35a-0832-4150-8295-913cbb4edae5,ResourceVersion:19275696,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000930ea7 0xc000930ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000930f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc000930f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.037: INFO: Pod "nginx-deployment-55fb7cb77f-tgcql" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tgcql,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-tgcql,UID:6d436e25-bdad-41ee-8d26-28f6be70150e,ResourceVersion:19275762,Generation:0,CreationTimestamp:2020-01-04 14:15:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000930fd7 0xc000930fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000931060} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-01-04 14:15:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "nginx:404",} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.037: INFO: Pod "nginx-deployment-55fb7cb77f-vvkqx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vvkqx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-vvkqx,UID:bbc8cfb8-5a78-4137-8f19-31073ae44324,ResourceVersion:19275725,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000931177 0xc000931178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009311e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.037: INFO: Pod "nginx-deployment-55fb7cb77f-xcnhn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xcnhn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-55fb7cb77f-xcnhn,UID:c5d1f261-4a5e-41d5-9f3f-460bf3361fae,ResourceVersion:19275704,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c07abc50-a069-4c4c-a7d3-bcb61a56e699 0xc000931287 0xc000931288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009312f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.037: INFO: Pod "nginx-deployment-7b8c6f4498-47w8n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-47w8n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-47w8n,UID:f9a32aa2-344c-4ef2-8c9f-56ef805df79e,ResourceVersion:19275714,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc000931397 0xc000931398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000931410} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.037: INFO: Pod "nginx-deployment-7b8c6f4498-57dnq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-57dnq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-57dnq,UID:e9ce020c-588a-4a91-b279-39d30c93fbe2,ResourceVersion:19275760,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc000931537 0xc000931538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009315a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009315c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 14:16:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.037: INFO: Pod "nginx-deployment-7b8c6f4498-5j62v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5j62v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-5j62v,UID:902433b6-7f4b-4d25-8377-9c123f8d40f7,ResourceVersion:19275570,Generation:0,CreationTimestamp:2020-01-04 14:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc000931687 0xc000931688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000931700} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-04 14:15:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 14:15:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e0d7f14cbc753138dcc2ecc3e0f8dfb64574a9da3ac41c16a795d571615b52e0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.037: INFO: Pod "nginx-deployment-7b8c6f4498-5lgbc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5lgbc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-5lgbc,UID:cb3ecc52-35ae-4d59-977c-f61106659dff,ResourceVersion:19275719,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc0009317f7 0xc0009317f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000931860} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 4 14:16:20.037: INFO: Pod "nginx-deployment-7b8c6f4498-5shp9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5shp9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-5shp9,UID:f6231d8e-ce17-4d23-8647-588cdaf61a9e,ResourceVersion:19275712,Generation:0,CreationTimestamp:2020-01-04 14:16:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc000931907 0xc000931908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000931980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0009319a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:06 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 14:16:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.038: INFO: Pod "nginx-deployment-7b8c6f4498-6bkjg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6bkjg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-6bkjg,UID:8b7add1f-f879-4aec-8f1a-2f93b50ff33a,ResourceVersion:19275567,Generation:0,CreationTimestamp:2020-01-04 14:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc000931a67 0xc000931a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000931ad0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-04 14:15:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 14:15:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://040b3e02b063305ec5e33f75f300584da72b761b2969834f8dfa6dc0d34f4fda}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.038: INFO: Pod "nginx-deployment-7b8c6f4498-8x6qt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8x6qt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-8x6qt,UID:96f6d026-931d-43c5-87d8-5f1284bea586,ResourceVersion:19275734,Generation:0,CreationTimestamp:2020-01-04 14:16:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc000931bc7 0xc000931bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000931c30} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-04 14:16:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.038: INFO: Pod "nginx-deployment-7b8c6f4498-92m9k" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-92m9k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-92m9k,UID:ba266ac0-e04a-4fe9-b5fc-4b8258db8e2d,ResourceVersion:19275563,Generation:0,CreationTimestamp:2020-01-04 14:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc000931d27 0xc000931d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000931da0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-04 14:15:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 14:15:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://93185bc543104970cfa22d33da02d2a691e5c0a0f6906db61ba88e6c8570590e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.038: INFO: Pod "nginx-deployment-7b8c6f4498-9p6q8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9p6q8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-9p6q8,UID:dca5722c-3a9c-4c0a-b281-36e899aa53df,ResourceVersion:19275718,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc000931e97 0xc000931e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000931f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc000931f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.038: INFO: Pod "nginx-deployment-7b8c6f4498-c6rfb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c6rfb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-c6rfb,UID:12a763c8-3ba1-4465-80d1-f2a525c16511,ResourceVersion:19275700,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc000931fb7 
0xc000931fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe030} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.038: INFO: Pod "nginx-deployment-7b8c6f4498-dn8f7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dn8f7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-dn8f7,UID:43cbeb7a-b4e5-4aba-9cf0-1bdab9d63a13,ResourceVersion:19275580,Generation:0,CreationTimestamp:2020-01-04 14:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebe0d7 0xc002ebe0d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe150} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-04 14:15:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 14:15:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ee79c96ee8c9c467d870eb8581d3ffbe21b1457eb8d3db431d8149576987433f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.039: INFO: Pod "nginx-deployment-7b8c6f4498-jf2lj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jf2lj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-jf2lj,UID:67ae2ed6-639d-4700-b57d-22ee46784e09,ResourceVersion:19275736,Generation:0,CreationTimestamp:2020-01-04 14:16:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebe247 0xc002ebe248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe2c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-04 14:16:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.039: INFO: Pod "nginx-deployment-7b8c6f4498-k4k5n" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k4k5n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-k4k5n,UID:6ecf4640-5626-4852-9bbb-74b8573a2d92,ResourceVersion:19275584,Generation:0,CreationTimestamp:2020-01-04 14:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebe3a7 0xc002ebe3a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe420} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-04 14:15:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 14:15:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ac95e1345fb5bc3df8e9a9749d15ee00a6603c20e5c5133f4b64ac869daa6994}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.039: INFO: Pod "nginx-deployment-7b8c6f4498-kk8vr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kk8vr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-kk8vr,UID:af8769ba-bcb1-4ab0-87f1-9f1ce1ad4e8b,ResourceVersion:19275694,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebe517 0xc002ebe518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe590} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.039: INFO: Pod "nginx-deployment-7b8c6f4498-lq9dv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lq9dv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-lq9dv,UID:72c26e0d-3955-4a36-9d91-7269c740f743,ResourceVersion:19275596,Generation:0,CreationTimestamp:2020-01-04 14:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebe637 0xc002ebe638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe6a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe6c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-04 14:15:16 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 14:15:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2e7fcf70579349431b7b9b8e121b93a8033bd565238d0c4bc3c0a3b2517fc881}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.039: INFO: Pod "nginx-deployment-7b8c6f4498-ml4f9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ml4f9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-ml4f9,UID:c2ab906a-56e4-4781-9bc4-a6430cc4b553,ResourceVersion:19275594,Generation:0,CreationTimestamp:2020-01-04 14:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebe797 0xc002ebe798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe800} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-04 14:15:15 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-04 14:15:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f503d63553245af76d4eb9d564c052974260d53d5145f00e0ed5f32c94e039c6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.039: INFO: Pod "nginx-deployment-7b8c6f4498-p4cfp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p4cfp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-p4cfp,UID:c4a2decf-2863-48fe-9248-12abb4bdd1a9,ResourceVersion:19275724,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebe8f7 0xc002ebe8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebe960} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebe980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.039: INFO: Pod "nginx-deployment-7b8c6f4498-pnj4m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pnj4m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-pnj4m,UID:dff90690-485c-4049-a9bc-c7d6101369f5,ResourceVersion:19275706,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebea07 
0xc002ebea08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebea70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebea90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:07 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.040: INFO: Pod "nginx-deployment-7b8c6f4498-qrqbb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qrqbb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-qrqbb,UID:d57081f9-3089-4375-8ea4-b5cf62e86af6,ResourceVersion:19275716,Generation:0,CreationTimestamp:2020-01-04 14:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebeb17 0xc002ebeb18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebeb90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebebb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:16:08 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 4 14:16:20.040: INFO: Pod "nginx-deployment-7b8c6f4498-spbhz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-spbhz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5816,SelfLink:/api/v1/namespaces/deployment-5816/pods/nginx-deployment-7b8c6f4498-spbhz,UID:f33b0233-aa62-4cce-8b11-79f0828038b2,ResourceVersion:19275575,Generation:0,CreationTimestamp:2020-01-04 14:15:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 636246a6-e83a-4f70-b422-67cb607f2f17 0xc002ebec37 0xc002ebec38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ph7rb {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-ph7rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ph7rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ebecb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ebecd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:15:15 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-04 14:15:19 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-01-04 14:15:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3b6f1b1744608a62f5bdcfb95594ea628ea72ede8c8d91258aa55851d76428bf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 14:16:20.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5816" for this suite. Jan 4 14:17:33.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 14:17:33.127: INFO: namespace deployment-5816 deletion completed in 1m10.818350916s • [SLOW TEST:138.065 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 14:17:33.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Creating a pod to test emptydir 0666 on node default medium Jan 4 14:17:33.304: INFO: Waiting up to 5m0s for pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912" in namespace "emptydir-5780" to be "success or failure" Jan 4 14:17:33.323: INFO: Pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912": Phase="Pending", Reason="", readiness=false. Elapsed: 18.106885ms Jan 4 14:17:35.332: INFO: Pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027157365s Jan 4 14:17:37.343: INFO: Pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038066642s Jan 4 14:17:39.349: INFO: Pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044304064s Jan 4 14:17:41.356: INFO: Pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051835704s Jan 4 14:17:43.369: INFO: Pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064216626s Jan 4 14:17:45.377: INFO: Pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912": Phase="Running", Reason="", readiness=true. Elapsed: 12.072225972s Jan 4 14:17:47.387: INFO: Pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.082282302s STEP: Saw pod success Jan 4 14:17:47.387: INFO: Pod "pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912" satisfied condition "success or failure" Jan 4 14:17:47.392: INFO: Trying to get logs from node iruya-node pod pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912 container test-container: STEP: delete the pod Jan 4 14:17:47.491: INFO: Waiting for pod pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912 to disappear Jan 4 14:17:47.584: INFO: Pod pod-f7ab19ce-f67d-4a24-9b8a-50c7bc701912 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 14:17:47.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5780" for this suite. Jan 4 14:17:53.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 14:17:53.753: INFO: namespace emptydir-5780 deletion completed in 6.157766098s • [SLOW TEST:20.625 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 14:17:53.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Jan 4 14:17:53.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jan 4 14:17:54.128: INFO: stderr: "" Jan 4 14:17:54.128: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 4 14:17:54.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5323" for this suite. 
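The api-versions test above shells out to `kubectl api-versions` and asserts that the core `v1` group/version appears in the newline-separated stdout it captured. A minimal standalone sketch of that membership check, assuming only the stdout shown in the log (the `has_api_version` helper is ours for illustration, not part of the e2e framework; the sample list below is a subset of the logged output):

```python
def has_api_version(stdout: str, version: str) -> bool:
    """Return True if `version` is one of the newline-separated API versions.

    Exact list membership is used rather than a substring search, so
    "apps/v1" does not spuriously match a query for "v1".
    """
    return version in stdout.strip().split("\n")


# Subset of the stdout logged by the test run above.
stdout = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nstorage.k8s.io/v1\nv1\n"

assert has_api_version(stdout, "v1")          # core group/version present
assert not has_api_version(stdout, "v2")      # absent versions are rejected
```

An exact match on whole lines is the important detail: nearly every group/version string ends in "v1", so a naive `"v1" in stdout` check would always pass.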
Jan 4 14:18:00.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 14:18:00.264: INFO: namespace kubectl-5323 deletion completed in 6.123435682s • [SLOW TEST:6.510 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 14:18:00.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 4 14:18:28.615: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 14:18:28.616: INFO: >>> kubeConfig: /root/.kube/config I0104 
14:18:28.833662 8 log.go:172] (0xc0008c46e0) (0xc0020b9e00) Create stream I0104 14:18:28.833864 8 log.go:172] (0xc0008c46e0) (0xc0020b9e00) Stream added, broadcasting: 1 I0104 14:18:28.848286 8 log.go:172] (0xc0008c46e0) Reply frame received for 1 I0104 14:18:28.848476 8 log.go:172] (0xc0008c46e0) (0xc0019d2820) Create stream I0104 14:18:28.848493 8 log.go:172] (0xc0008c46e0) (0xc0019d2820) Stream added, broadcasting: 3 I0104 14:18:28.855597 8 log.go:172] (0xc0008c46e0) Reply frame received for 3 I0104 14:18:28.855687 8 log.go:172] (0xc0008c46e0) (0xc0019d28c0) Create stream I0104 14:18:28.855704 8 log.go:172] (0xc0008c46e0) (0xc0019d28c0) Stream added, broadcasting: 5 I0104 14:18:28.863512 8 log.go:172] (0xc0008c46e0) Reply frame received for 5 I0104 14:18:29.094563 8 log.go:172] (0xc0008c46e0) Data frame received for 3 I0104 14:18:29.094690 8 log.go:172] (0xc0019d2820) (3) Data frame handling I0104 14:18:29.094739 8 log.go:172] (0xc0019d2820) (3) Data frame sent I0104 14:18:29.270352 8 log.go:172] (0xc0008c46e0) (0xc0019d2820) Stream removed, broadcasting: 3 I0104 14:18:29.270640 8 log.go:172] (0xc0008c46e0) Data frame received for 1 I0104 14:18:29.270693 8 log.go:172] (0xc0020b9e00) (1) Data frame handling I0104 14:18:29.270723 8 log.go:172] (0xc0020b9e00) (1) Data frame sent I0104 14:18:29.270775 8 log.go:172] (0xc0008c46e0) (0xc0020b9e00) Stream removed, broadcasting: 1 I0104 14:18:29.270800 8 log.go:172] (0xc0008c46e0) (0xc0019d28c0) Stream removed, broadcasting: 5 I0104 14:18:29.270823 8 log.go:172] (0xc0008c46e0) Go away received I0104 14:18:29.271121 8 log.go:172] (0xc0008c46e0) (0xc0020b9e00) Stream removed, broadcasting: 1 I0104 14:18:29.271141 8 log.go:172] (0xc0008c46e0) (0xc0019d2820) Stream removed, broadcasting: 3 I0104 14:18:29.271159 8 log.go:172] (0xc0008c46e0) (0xc0019d28c0) Stream removed, broadcasting: 5 Jan 4 14:18:29.271: INFO: Exec stderr: "" Jan 4 14:18:29.271: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] 
Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 4 14:18:29.271: INFO: >>> kubeConfig: /root/.kube/config I0104 14:18:29.385689 8 log.go:172] (0xc0012048f0) (0xc000388f00) Create stream I0104 14:18:29.385872 8 log.go:172] (0xc0012048f0) (0xc000388f00) Stream added, broadcasting: 1 I0104 14:18:29.398166 8 log.go:172] (0xc0012048f0) Reply frame received for 1 I0104 14:18:29.398219 8 log.go:172] (0xc0012048f0) (0xc001ca05a0) Create stream I0104 14:18:29.398226 8 log.go:172] (0xc0012048f0) (0xc001ca05a0) Stream added, broadcasting: 3 I0104 14:18:29.400808 8 log.go:172] (0xc0012048f0) Reply frame received for 3 I0104 14:18:29.400871 8 log.go:172] (0xc0012048f0) (0xc001ca0640) Create stream I0104 14:18:29.400885 8 log.go:172] (0xc0012048f0) (0xc001ca0640) Stream added, broadcasting: 5 I0104 14:18:29.403811 8 log.go:172] (0xc0012048f0) Reply frame received for 5 I0104 14:18:29.586467 8 log.go:172] (0xc0012048f0) Data frame received for 3 I0104 14:18:29.586589 8 log.go:172] (0xc001ca05a0) (3) Data frame handling I0104 14:18:29.586612 8 log.go:172] (0xc001ca05a0) (3) Data frame sent I0104 14:18:29.705888 8 log.go:172] (0xc0012048f0) (0xc001ca05a0) Stream removed, broadcasting: 3 I0104 14:18:29.706257 8 log.go:172] (0xc0012048f0) Data frame received for 1 I0104 14:18:29.706288 8 log.go:172] (0xc000388f00) (1) Data frame handling I0104 14:18:29.706316 8 log.go:172] (0xc000388f00) (1) Data frame sent I0104 14:18:29.706326 8 log.go:172] (0xc0012048f0) (0xc000388f00) Stream removed, broadcasting: 1 I0104 14:18:29.706419 8 log.go:172] (0xc0012048f0) (0xc001ca0640) Stream removed, broadcasting: 5 I0104 14:18:29.706456 8 log.go:172] (0xc0012048f0) Go away received I0104 14:18:29.706647 8 log.go:172] (0xc0012048f0) (0xc000388f00) Stream removed, broadcasting: 1 I0104 14:18:29.706656 8 log.go:172] (0xc0012048f0) (0xc001ca05a0) Stream removed, broadcasting: 3 I0104 14:18:29.706661 8 
log.go:172] (0xc0012048f0) (0xc001ca0640) Stream removed, broadcasting: 5
Jan 4 14:18:29.706: INFO: Exec stderr: ""
Jan 4 14:18:29.706: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:18:29.706: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:18:30.261312 8 log.go:172] (0xc0008c5340) (0xc00110c140) Create stream
I0104 14:18:30.261400 8 log.go:172] (0xc0008c5340) (0xc00110c140) Stream added, broadcasting: 1
I0104 14:18:30.271414 8 log.go:172] (0xc0008c5340) Reply frame received for 1
I0104 14:18:30.271577 8 log.go:172] (0xc0008c5340) (0xc001ca06e0) Create stream
I0104 14:18:30.271591 8 log.go:172] (0xc0008c5340) (0xc001ca06e0) Stream added, broadcasting: 3
I0104 14:18:30.280501 8 log.go:172] (0xc0008c5340) Reply frame received for 3
I0104 14:18:30.280560 8 log.go:172] (0xc0008c5340) (0xc00110c1e0) Create stream
I0104 14:18:30.280595 8 log.go:172] (0xc0008c5340) (0xc00110c1e0) Stream added, broadcasting: 5
I0104 14:18:30.283133 8 log.go:172] (0xc0008c5340) Reply frame received for 5
I0104 14:18:30.525686 8 log.go:172] (0xc0008c5340) Data frame received for 3
I0104 14:18:30.525899 8 log.go:172] (0xc001ca06e0) (3) Data frame handling
I0104 14:18:30.525950 8 log.go:172] (0xc001ca06e0) (3) Data frame sent
I0104 14:18:30.856025 8 log.go:172] (0xc0008c5340) (0xc001ca06e0) Stream removed, broadcasting: 3
I0104 14:18:30.856300 8 log.go:172] (0xc0008c5340) Data frame received for 1
I0104 14:18:30.856314 8 log.go:172] (0xc00110c140) (1) Data frame handling
I0104 14:18:30.856328 8 log.go:172] (0xc00110c140) (1) Data frame sent
I0104 14:18:30.856332 8 log.go:172] (0xc0008c5340) (0xc00110c140) Stream removed, broadcasting: 1
I0104 14:18:30.856629 8 log.go:172] (0xc0008c5340) (0xc00110c1e0) Stream removed, broadcasting: 5
I0104 14:18:30.856669 8 log.go:172] (0xc0008c5340) (0xc00110c140) Stream removed, broadcasting: 1
I0104 14:18:30.856710 8 log.go:172] (0xc0008c5340) (0xc001ca06e0) Stream removed, broadcasting: 3
I0104 14:18:30.856993 8 log.go:172] (0xc0008c5340) (0xc00110c1e0) Stream removed, broadcasting: 5
I0104 14:18:30.857074 8 log.go:172] (0xc0008c5340) Go away received
Jan 4 14:18:30.857: INFO: Exec stderr: ""
Jan 4 14:18:30.857: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:18:30.857: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:18:30.937347 8 log.go:172] (0xc000b17290) (0xc001ca0a00) Create stream
I0104 14:18:30.937683 8 log.go:172] (0xc000b17290) (0xc001ca0a00) Stream added, broadcasting: 1
I0104 14:18:30.947322 8 log.go:172] (0xc000b17290) Reply frame received for 1
I0104 14:18:30.947402 8 log.go:172] (0xc000b17290) (0xc0019d2960) Create stream
I0104 14:18:30.947408 8 log.go:172] (0xc000b17290) (0xc0019d2960) Stream added, broadcasting: 3
I0104 14:18:30.948679 8 log.go:172] (0xc000b17290) Reply frame received for 3
I0104 14:18:30.948697 8 log.go:172] (0xc000b17290) (0xc0019d2a00) Create stream
I0104 14:18:30.948705 8 log.go:172] (0xc000b17290) (0xc0019d2a00) Stream added, broadcasting: 5
I0104 14:18:30.949555 8 log.go:172] (0xc000b17290) Reply frame received for 5
I0104 14:18:31.028736 8 log.go:172] (0xc000b17290) Data frame received for 3
I0104 14:18:31.028809 8 log.go:172] (0xc0019d2960) (3) Data frame handling
I0104 14:18:31.028825 8 log.go:172] (0xc0019d2960) (3) Data frame sent
I0104 14:18:31.154755 8 log.go:172] (0xc000b17290) Data frame received for 1
I0104 14:18:31.155188 8 log.go:172] (0xc000b17290) (0xc0019d2a00) Stream removed, broadcasting: 5
I0104 14:18:31.155259 8 log.go:172] (0xc001ca0a00) (1) Data frame handling
I0104 14:18:31.155281 8 log.go:172] (0xc001ca0a00) (1) Data frame sent
I0104 14:18:31.155317 8 log.go:172] (0xc000b17290) (0xc0019d2960) Stream removed, broadcasting: 3
I0104 14:18:31.155339 8 log.go:172] (0xc000b17290) (0xc001ca0a00) Stream removed, broadcasting: 1
I0104 14:18:31.155351 8 log.go:172] (0xc000b17290) Go away received
I0104 14:18:31.156038 8 log.go:172] (0xc000b17290) (0xc001ca0a00) Stream removed, broadcasting: 1
I0104 14:18:31.156062 8 log.go:172] (0xc000b17290) (0xc0019d2960) Stream removed, broadcasting: 3
I0104 14:18:31.156088 8 log.go:172] (0xc000b17290) (0xc0019d2a00) Stream removed, broadcasting: 5
Jan 4 14:18:31.156: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 4 14:18:31.156: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:18:31.156: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:18:31.210801 8 log.go:172] (0xc0012053f0) (0xc000389680) Create stream
I0104 14:18:31.210893 8 log.go:172] (0xc0012053f0) (0xc000389680) Stream added, broadcasting: 1
I0104 14:18:31.217919 8 log.go:172] (0xc0012053f0) Reply frame received for 1
I0104 14:18:31.217991 8 log.go:172] (0xc0012053f0) (0xc001ca0b40) Create stream
I0104 14:18:31.218018 8 log.go:172] (0xc0012053f0) (0xc001ca0b40) Stream added, broadcasting: 3
I0104 14:18:31.219495 8 log.go:172] (0xc0012053f0) Reply frame received for 3
I0104 14:18:31.219516 8 log.go:172] (0xc0012053f0) (0xc000389720) Create stream
I0104 14:18:31.219524 8 log.go:172] (0xc0012053f0) (0xc000389720) Stream added, broadcasting: 5
I0104 14:18:31.221792 8 log.go:172] (0xc0012053f0) Reply frame received for 5
I0104 14:18:31.303810 8 log.go:172] (0xc0012053f0) Data frame received for 3
I0104 14:18:31.303840 8 log.go:172] (0xc001ca0b40) (3) Data frame handling
I0104 14:18:31.303858 8 log.go:172] (0xc001ca0b40) (3) Data frame sent
I0104 14:18:31.449837 8 log.go:172] (0xc0012053f0) Data frame received for 1
I0104 14:18:31.450034 8 log.go:172] (0xc0012053f0) (0xc001ca0b40) Stream removed, broadcasting: 3
I0104 14:18:31.450098 8 log.go:172] (0xc000389680) (1) Data frame handling
I0104 14:18:31.450183 8 log.go:172] (0xc000389680) (1) Data frame sent
I0104 14:18:31.450471 8 log.go:172] (0xc0012053f0) (0xc000389720) Stream removed, broadcasting: 5
I0104 14:18:31.450844 8 log.go:172] (0xc0012053f0) (0xc000389680) Stream removed, broadcasting: 1
I0104 14:18:31.450880 8 log.go:172] (0xc0012053f0) Go away received
I0104 14:18:31.451183 8 log.go:172] (0xc0012053f0) (0xc000389680) Stream removed, broadcasting: 1
I0104 14:18:31.451210 8 log.go:172] (0xc0012053f0) (0xc001ca0b40) Stream removed, broadcasting: 3
I0104 14:18:31.451229 8 log.go:172] (0xc0012053f0) (0xc000389720) Stream removed, broadcasting: 5
Jan 4 14:18:31.451: INFO: Exec stderr: ""
Jan 4 14:18:31.451: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:18:31.451: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:18:31.535758 8 log.go:172] (0xc001bb00b0) (0xc000389d60) Create stream
I0104 14:18:31.535954 8 log.go:172] (0xc001bb00b0) (0xc000389d60) Stream added, broadcasting: 1
I0104 14:18:31.545556 8 log.go:172] (0xc001bb00b0) Reply frame received for 1
I0104 14:18:31.545590 8 log.go:172] (0xc001bb00b0) (0xc0019d2aa0) Create stream
I0104 14:18:31.545600 8 log.go:172] (0xc001bb00b0) (0xc0019d2aa0) Stream added, broadcasting: 3
I0104 14:18:31.548869 8 log.go:172] (0xc001bb00b0) Reply frame received for 3
I0104 14:18:31.548976 8 log.go:172] (0xc001bb00b0) (0xc0018aa000) Create stream
I0104 14:18:31.548991 8 log.go:172] (0xc001bb00b0) (0xc0018aa000) Stream added, broadcasting: 5
I0104 14:18:31.551279 8 log.go:172] (0xc001bb00b0) Reply frame received for 5
I0104 14:18:31.650995 8 log.go:172] (0xc001bb00b0) Data frame received for 3
I0104 14:18:31.651101 8 log.go:172] (0xc0019d2aa0) (3) Data frame handling
I0104 14:18:31.651144 8 log.go:172] (0xc0019d2aa0) (3) Data frame sent
I0104 14:18:31.960246 8 log.go:172] (0xc001bb00b0) (0xc0018aa000) Stream removed, broadcasting: 5
I0104 14:18:31.960534 8 log.go:172] (0xc001bb00b0) (0xc0019d2aa0) Stream removed, broadcasting: 3
I0104 14:18:31.960592 8 log.go:172] (0xc001bb00b0) Data frame received for 1
I0104 14:18:31.960716 8 log.go:172] (0xc000389d60) (1) Data frame handling
I0104 14:18:31.960762 8 log.go:172] (0xc000389d60) (1) Data frame sent
I0104 14:18:31.960783 8 log.go:172] (0xc001bb00b0) (0xc000389d60) Stream removed, broadcasting: 1
I0104 14:18:31.960816 8 log.go:172] (0xc001bb00b0) Go away received
I0104 14:18:31.961088 8 log.go:172] (0xc001bb00b0) (0xc000389d60) Stream removed, broadcasting: 1
I0104 14:18:31.961103 8 log.go:172] (0xc001bb00b0) (0xc0019d2aa0) Stream removed, broadcasting: 3
I0104 14:18:31.961113 8 log.go:172] (0xc001bb00b0) (0xc0018aa000) Stream removed, broadcasting: 5
Jan 4 14:18:31.961: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 4 14:18:31.961: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:18:31.961: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:18:32.060912 8 log.go:172] (0xc001b046e0) (0xc001ca1040) Create stream
I0104 14:18:32.061180 8 log.go:172] (0xc001b046e0) (0xc001ca1040) Stream added, broadcasting: 1
I0104 14:18:32.086767 8 log.go:172] (0xc001b046e0) Reply frame received for 1
I0104 14:18:32.086993 8 log.go:172] (0xc001b046e0) (0xc000389ea0) Create stream
I0104 14:18:32.087013 8 log.go:172] (0xc001b046e0) (0xc000389ea0) Stream added, broadcasting: 3
I0104 14:18:32.093295 8 log.go:172] (0xc001b046e0) Reply frame received for 3
I0104 14:18:32.093367 8 log.go:172] (0xc001b046e0) (0xc0019d2d20) Create stream
I0104 14:18:32.093379 8 log.go:172] (0xc001b046e0) (0xc0019d2d20) Stream added, broadcasting: 5
I0104 14:18:32.095778 8 log.go:172] (0xc001b046e0) Reply frame received for 5
I0104 14:18:32.256204 8 log.go:172] (0xc001b046e0) Data frame received for 3
I0104 14:18:32.256312 8 log.go:172] (0xc000389ea0) (3) Data frame handling
I0104 14:18:32.256403 8 log.go:172] (0xc000389ea0) (3) Data frame sent
I0104 14:18:32.461917 8 log.go:172] (0xc001b046e0) Data frame received for 1
I0104 14:18:32.462068 8 log.go:172] (0xc001b046e0) (0xc000389ea0) Stream removed, broadcasting: 3
I0104 14:18:32.462098 8 log.go:172] (0xc001ca1040) (1) Data frame handling
I0104 14:18:32.462116 8 log.go:172] (0xc001ca1040) (1) Data frame sent
I0104 14:18:32.462291 8 log.go:172] (0xc001b046e0) (0xc0019d2d20) Stream removed, broadcasting: 5
I0104 14:18:32.462502 8 log.go:172] (0xc001b046e0) (0xc001ca1040) Stream removed, broadcasting: 1
I0104 14:18:32.462538 8 log.go:172] (0xc001b046e0) Go away received
I0104 14:18:32.462853 8 log.go:172] (0xc001b046e0) (0xc001ca1040) Stream removed, broadcasting: 1
I0104 14:18:32.462895 8 log.go:172] (0xc001b046e0) (0xc000389ea0) Stream removed, broadcasting: 3
I0104 14:18:32.462913 8 log.go:172] (0xc001b046e0) (0xc0019d2d20) Stream removed, broadcasting: 5
Jan 4 14:18:32.462: INFO: Exec stderr: ""
Jan 4 14:18:32.463: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:18:32.463: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:18:32.536848 8 log.go:172] (0xc001bb0d10) (0xc000389f40) Create stream
I0104 14:18:32.536992 8 log.go:172] (0xc001bb0d10) (0xc000389f40) Stream added, broadcasting: 1
I0104 14:18:32.547833 8 log.go:172] (0xc001bb0d10) Reply frame received for 1
I0104 14:18:32.547901 8 log.go:172] (0xc001bb0d10) (0xc00110c460) Create stream
I0104 14:18:32.547957 8 log.go:172] (0xc001bb0d10) (0xc00110c460) Stream added, broadcasting: 3
I0104 14:18:32.550841 8 log.go:172] (0xc001bb0d10) Reply frame received for 3
I0104 14:18:32.550942 8 log.go:172] (0xc001bb0d10) (0xc001dc80a0) Create stream
I0104 14:18:32.550954 8 log.go:172] (0xc001bb0d10) (0xc001dc80a0) Stream added, broadcasting: 5
I0104 14:18:32.553271 8 log.go:172] (0xc001bb0d10) Reply frame received for 5
I0104 14:18:32.696758 8 log.go:172] (0xc001bb0d10) Data frame received for 3
I0104 14:18:32.696867 8 log.go:172] (0xc00110c460) (3) Data frame handling
I0104 14:18:32.696887 8 log.go:172] (0xc00110c460) (3) Data frame sent
I0104 14:18:32.799076 8 log.go:172] (0xc001bb0d10) Data frame received for 1
I0104 14:18:32.799248 8 log.go:172] (0xc001bb0d10) (0xc001dc80a0) Stream removed, broadcasting: 5
I0104 14:18:32.799309 8 log.go:172] (0xc000389f40) (1) Data frame handling
I0104 14:18:32.799324 8 log.go:172] (0xc000389f40) (1) Data frame sent
I0104 14:18:32.799353 8 log.go:172] (0xc001bb0d10) (0xc00110c460) Stream removed, broadcasting: 3
I0104 14:18:32.799383 8 log.go:172] (0xc001bb0d10) (0xc000389f40) Stream removed, broadcasting: 1
I0104 14:18:32.799397 8 log.go:172] (0xc001bb0d10) Go away received
I0104 14:18:32.799965 8 log.go:172] (0xc001bb0d10) (0xc000389f40) Stream removed, broadcasting: 1
I0104 14:18:32.800099 8 log.go:172] (0xc001bb0d10) (0xc00110c460) Stream removed, broadcasting: 3
I0104 14:18:32.800112 8 log.go:172] (0xc001bb0d10) (0xc001dc80a0) Stream removed, broadcasting: 5
Jan 4 14:18:32.800: INFO: Exec stderr: ""
Jan 4 14:18:32.800: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:18:32.800: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:18:32.886227 8 log.go:172] (0xc0018308f0) (0xc0018aa320) Create stream
I0104 14:18:32.886511 8 log.go:172] (0xc0018308f0) (0xc0018aa320) Stream added, broadcasting: 1
I0104 14:18:32.912134 8 log.go:172] (0xc0018308f0) Reply frame received for 1
I0104 14:18:32.912246 8 log.go:172] (0xc0018308f0) (0xc00110c500) Create stream
I0104 14:18:32.912252 8 log.go:172] (0xc0018308f0) (0xc00110c500) Stream added, broadcasting: 3
I0104 14:18:32.916074 8 log.go:172] (0xc0018308f0) Reply frame received for 3
I0104 14:18:32.916136 8 log.go:172] (0xc0018308f0) (0xc0018aa3c0) Create stream
I0104 14:18:32.916151 8 log.go:172] (0xc0018308f0) (0xc0018aa3c0) Stream added, broadcasting: 5
I0104 14:18:32.918286 8 log.go:172] (0xc0018308f0) Reply frame received for 5
I0104 14:18:33.101249 8 log.go:172] (0xc0018308f0) Data frame received for 3
I0104 14:18:33.101362 8 log.go:172] (0xc00110c500) (3) Data frame handling
I0104 14:18:33.101381 8 log.go:172] (0xc00110c500) (3) Data frame sent
I0104 14:18:33.200401 8 log.go:172] (0xc0018308f0) (0xc0018aa3c0) Stream removed, broadcasting: 5
I0104 14:18:33.200591 8 log.go:172] (0xc0018308f0) Data frame received for 1
I0104 14:18:33.200608 8 log.go:172] (0xc0018aa320) (1) Data frame handling
I0104 14:18:33.200639 8 log.go:172] (0xc0018aa320) (1) Data frame sent
I0104 14:18:33.200674 8 log.go:172] (0xc0018308f0) (0xc0018aa320) Stream removed, broadcasting: 1
I0104 14:18:33.200950 8 log.go:172] (0xc0018308f0) (0xc00110c500) Stream removed, broadcasting: 3
I0104 14:18:33.201018 8 log.go:172] (0xc0018308f0) Go away received
I0104 14:18:33.201335 8 log.go:172] (0xc0018308f0) (0xc0018aa320) Stream removed, broadcasting: 1
I0104 14:18:33.201400 8 log.go:172] (0xc0018308f0) (0xc00110c500) Stream removed, broadcasting: 3
I0104 14:18:33.201422 8 log.go:172] (0xc0018308f0) (0xc0018aa3c0) Stream removed, broadcasting: 5
Jan 4 14:18:33.201: INFO: Exec stderr: ""
Jan 4 14:18:33.201: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3438 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 4 14:18:33.201: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:18:33.256901 8 log.go:172] (0xc00255e210) (0xc001dc8500) Create stream
I0104 14:18:33.256995 8 log.go:172] (0xc00255e210) (0xc001dc8500) Stream added, broadcasting: 1
I0104 14:18:33.264754 8 log.go:172] (0xc00255e210) Reply frame received for 1
I0104 14:18:33.264836 8 log.go:172] (0xc00255e210) (0xc0018aa460) Create stream
I0104 14:18:33.264846 8 log.go:172] (0xc00255e210) (0xc0018aa460) Stream added, broadcasting: 3
I0104 14:18:33.267227 8 log.go:172] (0xc00255e210) Reply frame received for 3
I0104 14:18:33.267282 8 log.go:172] (0xc00255e210) (0xc001dc8640) Create stream
I0104 14:18:33.267294 8 log.go:172] (0xc00255e210) (0xc001dc8640) Stream added, broadcasting: 5
I0104 14:18:33.269140 8 log.go:172] (0xc00255e210) Reply frame received for 5
I0104 14:18:33.347484 8 log.go:172] (0xc00255e210) Data frame received for 3
I0104 14:18:33.347511 8 log.go:172] (0xc0018aa460) (3) Data frame handling
I0104 14:18:33.347531 8 log.go:172] (0xc0018aa460) (3) Data frame sent
I0104 14:18:33.466355 8 log.go:172] (0xc00255e210) (0xc0018aa460) Stream removed, broadcasting: 3
I0104 14:18:33.466729 8 log.go:172] (0xc00255e210) Data frame received for 1
I0104 14:18:33.466859 8 log.go:172] (0xc00255e210) (0xc001dc8640) Stream removed, broadcasting: 5
I0104 14:18:33.466937 8 log.go:172] (0xc001dc8500) (1) Data frame handling
I0104 14:18:33.466952 8 log.go:172] (0xc001dc8500) (1) Data frame sent
I0104 14:18:33.466968 8 log.go:172] (0xc00255e210) (0xc001dc8500) Stream removed, broadcasting: 1
I0104 14:18:33.466978 8 log.go:172] (0xc00255e210) Go away received
I0104 14:18:33.467275 8 log.go:172] (0xc00255e210) (0xc001dc8500) Stream removed, broadcasting: 1
I0104 14:18:33.467293 8 log.go:172] (0xc00255e210) (0xc0018aa460) Stream removed, broadcasting: 3
I0104 14:18:33.467305 8 log.go:172] (0xc00255e210) (0xc001dc8640) Stream removed, broadcasting: 5
Jan 4 14:18:33.467: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:18:33.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3438" for this suite.
Jan 4 14:19:37.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:19:37.690: INFO: namespace e2e-kubelet-etc-hosts-3438 deletion completed in 1m4.212602328s

• [SLOW TEST:97.427 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:19:37.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 4 14:19:37.945: INFO: Waiting up to 5m0s for pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da" in namespace "var-expansion-8545" to be "success or failure"
Jan 4 14:19:38.051: INFO: Pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da": Phase="Pending", Reason="", readiness=false. Elapsed: 106.254179ms
Jan 4 14:19:40.060: INFO: Pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115085291s
Jan 4 14:19:42.075: INFO: Pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130018943s
Jan 4 14:19:44.091: INFO: Pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146282031s
Jan 4 14:19:46.101: INFO: Pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155530085s
Jan 4 14:19:48.112: INFO: Pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da": Phase="Pending", Reason="", readiness=false. Elapsed: 10.167288958s
Jan 4 14:19:50.123: INFO: Pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da": Phase="Pending", Reason="", readiness=false. Elapsed: 12.177605287s
Jan 4 14:19:52.157: INFO: Pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.212028842s
STEP: Saw pod success
Jan 4 14:19:52.158: INFO: Pod "var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da" satisfied condition "success or failure"
Jan 4 14:19:52.168: INFO: Trying to get logs from node iruya-node pod var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da container dapi-container:
STEP: delete the pod
Jan 4 14:19:52.581: INFO: Waiting for pod var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da to disappear
Jan 4 14:19:52.661: INFO: Pod var-expansion-48085ae0-9c50-4521-a524-2c7fcc2c90da no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:19:52.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8545" for this suite.
Jan 4 14:19:58.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:19:58.803: INFO: namespace var-expansion-8545 deletion completed in 6.126308196s

• [SLOW TEST:21.112 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:19:58.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 4 14:19:58.914: INFO: Waiting up to 5m0s for pod "pod-9071b53a-4c3e-44d2-8359-63e2e3165164" in namespace "emptydir-9221" to be "success or failure"
Jan 4 14:19:58.954: INFO: Pod "pod-9071b53a-4c3e-44d2-8359-63e2e3165164": Phase="Pending", Reason="", readiness=false. Elapsed: 39.218994ms
Jan 4 14:20:00.976: INFO: Pod "pod-9071b53a-4c3e-44d2-8359-63e2e3165164": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061350501s
Jan 4 14:20:02.995: INFO: Pod "pod-9071b53a-4c3e-44d2-8359-63e2e3165164": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080627781s
Jan 4 14:20:05.001: INFO: Pod "pod-9071b53a-4c3e-44d2-8359-63e2e3165164": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086645103s
Jan 4 14:20:07.009: INFO: Pod "pod-9071b53a-4c3e-44d2-8359-63e2e3165164": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094169604s
Jan 4 14:20:09.018: INFO: Pod "pod-9071b53a-4c3e-44d2-8359-63e2e3165164": Phase="Pending", Reason="", readiness=false. Elapsed: 10.10353366s
Jan 4 14:20:11.027: INFO: Pod "pod-9071b53a-4c3e-44d2-8359-63e2e3165164": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.112736316s
STEP: Saw pod success
Jan 4 14:20:11.028: INFO: Pod "pod-9071b53a-4c3e-44d2-8359-63e2e3165164" satisfied condition "success or failure"
Jan 4 14:20:11.031: INFO: Trying to get logs from node iruya-node pod pod-9071b53a-4c3e-44d2-8359-63e2e3165164 container test-container:
STEP: delete the pod
Jan 4 14:20:11.142: INFO: Waiting for pod pod-9071b53a-4c3e-44d2-8359-63e2e3165164 to disappear
Jan 4 14:20:11.151: INFO: Pod pod-9071b53a-4c3e-44d2-8359-63e2e3165164 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:20:11.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9221" for this suite.
Jan 4 14:20:17.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:20:17.331: INFO: namespace emptydir-9221 deletion completed in 6.174929033s

• [SLOW TEST:18.528 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:20:17.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 4 14:20:17.547: INFO: Number of nodes with available pods: 0
Jan 4 14:20:17.547: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:19.335: INFO: Number of nodes with available pods: 0
Jan 4 14:20:19.335: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:19.872: INFO: Number of nodes with available pods: 0
Jan 4 14:20:19.872: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:20.919: INFO: Number of nodes with available pods: 0
Jan 4 14:20:20.919: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:21.566: INFO: Number of nodes with available pods: 0
Jan 4 14:20:21.566: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:22.601: INFO: Number of nodes with available pods: 0
Jan 4 14:20:22.601: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:24.105: INFO: Number of nodes with available pods: 0
Jan 4 14:20:24.106: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:25.445: INFO: Number of nodes with available pods: 0
Jan 4 14:20:25.445: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:25.900: INFO: Number of nodes with available pods: 0
Jan 4 14:20:25.900: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:27.546: INFO: Number of nodes with available pods: 0
Jan 4 14:20:27.546: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:27.970: INFO: Number of nodes with available pods: 0
Jan 4 14:20:27.970: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:28.573: INFO: Number of nodes with available pods: 0
Jan 4 14:20:28.574: INFO: Node iruya-node is running more than one daemon pod
Jan 4 14:20:29.565: INFO: Number of nodes with available pods: 1
Jan 4 14:20:29.565: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:30.568: INFO: Number of nodes with available pods: 2
Jan 4 14:20:30.568: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 4 14:20:30.720: INFO: Number of nodes with available pods: 1
Jan 4 14:20:30.720: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:32.220: INFO: Number of nodes with available pods: 1
Jan 4 14:20:32.220: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:34.991: INFO: Number of nodes with available pods: 1
Jan 4 14:20:34.992: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:38.077: INFO: Number of nodes with available pods: 1
Jan 4 14:20:38.077: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:38.805: INFO: Number of nodes with available pods: 1
Jan 4 14:20:38.805: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:39.744: INFO: Number of nodes with available pods: 1
Jan 4 14:20:39.744: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:40.736: INFO: Number of nodes with available pods: 1
Jan 4 14:20:40.736: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:42.327: INFO: Number of nodes with available pods: 1
Jan 4 14:20:42.327: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:42.930: INFO: Number of nodes with available pods: 1
Jan 4 14:20:42.930: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:44.040: INFO: Number of nodes with available pods: 1
Jan 4 14:20:44.040: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:44.734: INFO: Number of nodes with available pods: 1
Jan 4 14:20:44.734: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:45.748: INFO: Number of nodes with available pods: 1
Jan 4 14:20:45.749: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 4 14:20:46.755: INFO: Number of nodes with available pods: 2
Jan 4 14:20:46.755: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-807, will wait for the garbage collector to delete the pods
Jan 4 14:20:46.820: INFO: Deleting DaemonSet.extensions daemon-set took: 5.877303ms
Jan 4 14:20:47.221: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.973019ms
Jan 4 14:20:56.045: INFO: Number of nodes with available pods: 0
Jan 4 14:20:56.045: INFO: Number of running nodes: 0, number of available pods: 0
Jan 4 14:20:56.053: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-807/daemonsets","resourceVersion":"19276514"},"items":null}
Jan 4 14:20:56.057: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-807/pods","resourceVersion":"19276514"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:20:56.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-807" for this suite.
Jan 4 14:21:04.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 4 14:21:04.241: INFO: namespace daemonsets-807 deletion completed in 8.167203529s

• [SLOW TEST:46.909 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 4 14:21:04.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 4 14:21:04.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af" in namespace "downward-api-4" to be "success or failure"
Jan 4 14:21:04.589: INFO: Pod "downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af": Phase="Pending", Reason="", readiness=false. Elapsed: 89.514424ms
Jan 4 14:21:06.606: INFO: Pod "downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106155646s
Jan 4 14:21:08.625: INFO: Pod "downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1251268s
Jan 4 14:21:10.639: INFO: Pod "downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138826599s
Jan 4 14:21:12.646: INFO: Pod "downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af": Phase="Running", Reason="", readiness=true. Elapsed: 8.146342452s
Jan 4 14:21:14.663: INFO: Pod "downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.162727361s
STEP: Saw pod success
Jan 4 14:21:14.663: INFO: Pod "downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af" satisfied condition "success or failure"
Jan 4 14:21:14.667: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af container client-container:
STEP: delete the pod
Jan 4 14:21:14.731: INFO: Waiting for pod downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af to disappear
Jan 4 14:21:14.771: INFO: Pod downwardapi-volume-11024ea1-efa9-4709-b3d3-f6c112a233af no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 4 14:21:14.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4" for this suite.
Jan 4 14:21:20.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 4 14:21:20.956: INFO: namespace downward-api-4 deletion completed in 6.176720966s • [SLOW TEST:16.714 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 4 14:21:20.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0104 14:21:25.613430 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jan  4 14:21:25.613: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:21:25.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2262" for this suite.
Jan  4 14:21:31.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:21:32.353: INFO: namespace gc-2262 deletion completed in 6.735418666s

• [SLOW TEST:11.397 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:21:32.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 14:21:32.537: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 99.743346ms)
Jan  4 14:21:32.546: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.835605ms)
Jan  4 14:21:32.556: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.768232ms)
Jan  4 14:21:32.584: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 27.14223ms)
Jan  4 14:21:32.603: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.057497ms)
Jan  4 14:21:32.612: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.583947ms)
Jan  4 14:21:32.622: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.059436ms)
Jan  4 14:21:32.630: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.835461ms)
Jan  4 14:21:32.636: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.728641ms)
Jan  4 14:21:32.644: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.368522ms)
Jan  4 14:21:32.649: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.621927ms)
Jan  4 14:21:32.654: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.64298ms)
Jan  4 14:21:32.660: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.949942ms)
Jan  4 14:21:32.667: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.741619ms)
Jan  4 14:21:32.673: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.716476ms)
Jan  4 14:21:32.680: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.753664ms)
Jan  4 14:21:32.687: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.423602ms)
Jan  4 14:21:32.694: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.322633ms)
Jan  4 14:21:32.700: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.743901ms)
Jan  4 14:21:32.705: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.248216ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:21:32.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7715" for this suite.
Jan  4 14:21:40.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:21:40.929: INFO: namespace proxy-7715 deletion completed in 8.21831802s

• [SLOW TEST:8.576 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
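(Editor's note: the 20 probe lines above each end with "(<status>; <latency>)"; the first request is cold (~100ms) and the rest settle around 5–10ms. A small post-processing sketch, not part of the suite, that pulls those figures out of the raw log lines:)

```python
import re

# Matches the trailing "(200; 99.743346ms)" on each proxy probe line.
LAT_RE = re.compile(r"\((\d{3}); ([\d.]+)ms\)")

def summarize(lines):
    """Collect the HTTP status codes and latencies (in ms) from probe lines."""
    statuses, lats = set(), []
    for line in lines:
        m = LAT_RE.search(line)
        if m:
            statuses.add(int(m.group(1)))
            lats.append(float(m.group(2)))
    return statuses, lats
```

Feeding it the probe lines above yields a single status set `{200}` and a latency list whose first entry (the cold request) dominates the max.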
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:21:40.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 14:21:41.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8822'
Jan  4 14:21:41.379: INFO: stderr: ""
Jan  4 14:21:41.379: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  4 14:21:56.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-8822 -o json'
Jan  4 14:21:56.582: INFO: stderr: ""
Jan  4 14:21:56.582: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-04T14:21:41Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-8822\",\n        \"resourceVersion\": \"19276694\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-8822/pods/e2e-test-nginx-pod\",\n        \"uid\": \"cd7367db-d64d-4e4b-9db8-2fc900086eb7\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-v7fjv\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-v7fjv\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-v7fjv\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-04T14:21:41Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-04T14:21:51Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-04T14:21:51Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-04T14:21:41Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://9e0764db9a8ce0c2735f1b87f182dcbf50d5e0d14b528aa629a3398776fdc8c0\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2020-01-04T14:21:50Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-04T14:21:41Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  4 14:21:56.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8822'
Jan  4 14:21:56.907: INFO: stderr: ""
Jan  4 14:21:56.907: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan  4 14:21:56.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8822'
Jan  4 14:22:06.099: INFO: stderr: ""
Jan  4 14:22:06.100: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:22:06.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8822" for this suite.
Jan  4 14:22:12.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:22:12.251: INFO: namespace kubectl-8822 deletion completed in 6.139941636s

• [SLOW TEST:31.321 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
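(Editor's note: the replace test above fetches the pod as JSON with `kubectl get -o json`, swaps the container image, and pipes the result to `kubectl replace -f -`. A minimal sketch of that edit step, using a hand-trimmed subset of the pod object rather than the full dump the test pipes:)

```python
import json

# Trimmed pod manifest (subset of the JSON printed by the test above).
pod_json = '''{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "e2e-test-nginx-pod", "namespace": "kubectl-8822"},
  "spec": {"containers": [{"name": "e2e-test-nginx-pod",
                           "image": "docker.io/library/nginx:1.14-alpine"}]}
}'''

# Swap the single container's image, then re-serialize for "replace -f -".
pod = json.loads(pod_json)
pod["spec"]["containers"][0]["image"] = "docker.io/library/busybox:1.29"
replacement = json.dumps(pod)
```

The new image string matches the one the test verifies ("the right image docker.io/library/busybox:1.29").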
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:22:12.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7000/configmap-test-19611862-843e-4859-a4d6-29f54fd8eb70
STEP: Creating a pod to test consume configMaps
Jan  4 14:22:12.612: INFO: Waiting up to 5m0s for pod "pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6" in namespace "configmap-7000" to be "success or failure"
Jan  4 14:22:12.632: INFO: Pod "pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.519977ms
Jan  4 14:22:14.637: INFO: Pod "pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02501466s
Jan  4 14:22:16.644: INFO: Pod "pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031308436s
Jan  4 14:22:18.660: INFO: Pod "pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048023326s
Jan  4 14:22:20.668: INFO: Pod "pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055185796s
Jan  4 14:22:22.676: INFO: Pod "pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063507901s
Jan  4 14:22:24.686: INFO: Pod "pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.073582918s
STEP: Saw pod success
Jan  4 14:22:24.686: INFO: Pod "pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6" satisfied condition "success or failure"
Jan  4 14:22:24.690: INFO: Trying to get logs from node iruya-node pod pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6 container env-test: 
STEP: delete the pod
Jan  4 14:22:24.748: INFO: Waiting for pod pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6 to disappear
Jan  4 14:22:24.787: INFO: Pod pod-configmaps-741c268a-8595-4be3-b3ec-2565f8f615d6 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:22:24.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7000" for this suite.
Jan  4 14:22:30.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:22:30.955: INFO: namespace configmap-7000 deletion completed in 6.162726154s

• [SLOW TEST:18.703 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
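(Editor's note: the repeated `Phase="Pending" ... Elapsed: 2.0s / 4.0s / ...` lines throughout this run come from a fixed-interval poll against a timeout, e.g. "Waiting up to 5m0s ... to be success or failure". A generic sketch of that loop; the names here are mine, not the framework's:)

```python
import time

def wait_for(condition, timeout, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True;
    raise TimeoutError once `timeout` seconds have elapsed.
    Returns the elapsed time on success (cf. the "Elapsed:" log lines)."""
    start = clock()
    while True:
        if condition():
            return clock() - start
        if clock() - start >= timeout:
            raise TimeoutError(f"condition not met within {timeout}s")
        sleep(interval)
```

With a 2-second interval this reproduces the cadence of the log: each retry reports an elapsed time roughly 2s later than the previous one.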
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:22:30.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 14:22:31.060: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  4 14:22:31.084: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  4 14:22:36.107: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  4 14:22:46.126: INFO: Creating deployment "test-rolling-update-deployment"
Jan  4 14:22:46.135: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  4 14:22:46.159: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  4 14:22:48.172: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  4 14:22:48.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:22:50.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:22:52.199: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:22:54.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713744566, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:22:56.181: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  4 14:22:56.197: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1787,SelfLink:/apis/apps/v1/namespaces/deployment-1787/deployments/test-rolling-update-deployment,UID:96d642eb-79e9-4e36-a065-c8eb73cc47af,ResourceVersion:19276870,Generation:1,CreationTimestamp:2020-01-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-04 14:22:46 +0000 UTC 2020-01-04 14:22:46 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-04 14:22:54 +0000 UTC 2020-01-04 14:22:46 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  4 14:22:56.203: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1787,SelfLink:/apis/apps/v1/namespaces/deployment-1787/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:38ebd5b6-7bd2-465b-8128-7ef2c5dd81e7,ResourceVersion:19276859,Generation:1,CreationTimestamp:2020-01-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 96d642eb-79e9-4e36-a065-c8eb73cc47af 0xc001a55d37 0xc001a55d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  4 14:22:56.203: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  4 14:22:56.203: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1787,SelfLink:/apis/apps/v1/namespaces/deployment-1787/replicasets/test-rolling-update-controller,UID:65cb323f-2b43-4682-a0df-1902a8e2fff7,ResourceVersion:19276868,Generation:2,CreationTimestamp:2020-01-04 14:22:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 96d642eb-79e9-4e36-a065-c8eb73cc47af 0xc001a55c4f 0xc001a55c60}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 14:22:56.207: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-6kwzr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-6kwzr,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1787,SelfLink:/api/v1/namespaces/deployment-1787/pods/test-rolling-update-deployment-79f6b9d75c-6kwzr,UID:f4834c5c-bc97-4a8c-86dd-1597448682b4,ResourceVersion:19276858,Generation:0,CreationTimestamp:2020-01-04 14:22:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 38ebd5b6-7bd2-465b-8128-7ef2c5dd81e7 0xc0029015f7 0xc0029015f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lw48b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lw48b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-lw48b true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002901670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002901690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:22:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:22:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:22:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:22:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-04 14:22:46 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-04 14:22:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ffc8c86551c15c3820fa3d38984d93bfaad49bced233d42a4893fe5fbdf6a2d4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:22:56.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1787" for this suite.
Jan  4 14:23:02.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:23:02.457: INFO: namespace deployment-1787 deletion completed in 6.245036214s

• [SLOW TEST:31.502 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:23:02.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-f6a9a786-4a50-437d-91db-f55d9b3638ec
STEP: Creating a pod to test consume secrets
Jan  4 14:23:02.848: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072" in namespace "projected-8267" to be "success or failure"
Jan  4 14:23:03.133: INFO: Pod "pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072": Phase="Pending", Reason="", readiness=false. Elapsed: 284.613648ms
Jan  4 14:23:05.150: INFO: Pod "pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301268741s
Jan  4 14:23:07.156: INFO: Pod "pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307562651s
Jan  4 14:23:09.163: INFO: Pod "pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315125003s
Jan  4 14:23:11.174: INFO: Pod "pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32519179s
Jan  4 14:23:13.181: INFO: Pod "pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072": Phase="Pending", Reason="", readiness=false. Elapsed: 10.332880824s
Jan  4 14:23:15.194: INFO: Pod "pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.345940569s
STEP: Saw pod success
Jan  4 14:23:15.194: INFO: Pod "pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072" satisfied condition "success or failure"
Jan  4 14:23:15.198: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 14:23:15.239: INFO: Waiting for pod pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072 to disappear
Jan  4 14:23:15.353: INFO: Pod pod-projected-secrets-dc0a9c92-9b91-47ee-92f2-ea422b518072 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:23:15.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8267" for this suite.
Jan  4 14:23:21.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:23:21.748: INFO: namespace projected-8267 deletion completed in 6.385060906s

• [SLOW TEST:19.289 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:23:21.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan  4 14:23:34.040: INFO: Pod pod-hostip-4548fa28-0b17-4a5b-8cb5-3f65621bf5b4 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:23:34.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6766" for this suite.
Jan  4 14:23:56.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:23:56.162: INFO: namespace pods-6766 deletion completed in 22.117859621s

• [SLOW TEST:34.414 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:23:56.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-b94d55a9-c818-4ea8-8f88-7aec2497d4a4
STEP: Creating a pod to test consume secrets
Jan  4 14:23:56.371: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e" in namespace "projected-5714" to be "success or failure"
Jan  4 14:23:56.432: INFO: Pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 60.904077ms
Jan  4 14:23:58.442: INFO: Pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070850361s
Jan  4 14:24:00.452: INFO: Pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08052868s
Jan  4 14:24:02.466: INFO: Pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094762151s
Jan  4 14:24:04.478: INFO: Pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107436467s
Jan  4 14:24:06.648: INFO: Pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.276469427s
Jan  4 14:24:08.653: INFO: Pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.282436096s
Jan  4 14:24:10.658: INFO: Pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.286758086s
STEP: Saw pod success
Jan  4 14:24:10.658: INFO: Pod "pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e" satisfied condition "success or failure"
Jan  4 14:24:10.661: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 14:24:10.777: INFO: Waiting for pod pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e to disappear
Jan  4 14:24:10.948: INFO: Pod pod-projected-secrets-8a20322c-79c4-4ea4-915f-be8d16058f6e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:24:10.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5714" for this suite.
Jan  4 14:24:17.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:24:17.215: INFO: namespace projected-5714 deletion completed in 6.257842842s

• [SLOW TEST:21.052 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:24:17.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  4 14:24:26.078: INFO: 10 pods remaining
Jan  4 14:24:26.078: INFO: 4 pods have nil DeletionTimestamp
Jan  4 14:24:26.078: INFO: 
Jan  4 14:24:26.564: INFO: 0 pods remaining
Jan  4 14:24:26.564: INFO: 0 pods have nil DeletionTimestamp
Jan  4 14:24:26.564: INFO: 
STEP: Gathering metrics
W0104 14:24:27.188676       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 14:24:27.188: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:24:27.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4812" for this suite.
Jan  4 14:24:41.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:24:41.611: INFO: namespace gc-4812 deletion completed in 14.419019173s

• [SLOW TEST:24.396 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:24:41.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c75435ab-e48c-4a10-a309-16bf89cba109
STEP: Creating a pod to test consume configMaps
Jan  4 14:24:41.908: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004" in namespace "configmap-9959" to be "success or failure"
Jan  4 14:24:42.015: INFO: Pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004": Phase="Pending", Reason="", readiness=false. Elapsed: 106.672503ms
Jan  4 14:24:44.033: INFO: Pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124459168s
Jan  4 14:24:46.041: INFO: Pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132633716s
Jan  4 14:24:48.055: INFO: Pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147080647s
Jan  4 14:24:50.067: INFO: Pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158759762s
Jan  4 14:24:52.079: INFO: Pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.170865166s
Jan  4 14:24:54.091: INFO: Pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.182384313s
Jan  4 14:24:56.102: INFO: Pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.193536529s
STEP: Saw pod success
Jan  4 14:24:56.102: INFO: Pod "pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004" satisfied condition "success or failure"
Jan  4 14:24:56.107: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004 container configmap-volume-test: 
STEP: delete the pod
Jan  4 14:24:56.174: INFO: Waiting for pod pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004 to disappear
Jan  4 14:24:56.178: INFO: Pod pod-configmaps-9a296604-482d-426c-b18e-b3f2e0de4004 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:24:56.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9959" for this suite.
Jan  4 14:25:02.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:25:02.396: INFO: namespace configmap-9959 deletion completed in 6.211435524s

• [SLOW TEST:20.784 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:25:02.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan  4 14:25:13.058: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6917 pod-service-account-8a917334-7b73-4c26-9f86-64ccd5c8a780 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan  4 14:25:15.594: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6917 pod-service-account-8a917334-7b73-4c26-9f86-64ccd5c8a780 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan  4 14:25:16.101: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6917 pod-service-account-8a917334-7b73-4c26-9f86-64ccd5c8a780 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:25:16.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6917" for this suite.
Jan  4 14:25:22.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:25:22.845: INFO: namespace svcaccounts-6917 deletion completed in 6.177138231s

• [SLOW TEST:20.449 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:25:22.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3938
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3938
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3938
Jan  4 14:25:23.121: INFO: Found 0 stateful pods, waiting for 1
Jan  4 14:25:33.134: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 14:25:43.155: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  4 14:25:43.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 14:25:43.894: INFO: stderr: "I0104 14:25:43.392600     391 log.go:172] (0xc000118bb0) (0xc000792000) Create stream\nI0104 14:25:43.392701     391 log.go:172] (0xc000118bb0) (0xc000792000) Stream added, broadcasting: 1\nI0104 14:25:43.398791     391 log.go:172] (0xc000118bb0) Reply frame received for 1\nI0104 14:25:43.398881     391 log.go:172] (0xc000118bb0) (0xc00027e000) Create stream\nI0104 14:25:43.398891     391 log.go:172] (0xc000118bb0) (0xc00027e000) Stream added, broadcasting: 3\nI0104 14:25:43.399876     391 log.go:172] (0xc000118bb0) Reply frame received for 3\nI0104 14:25:43.399910     391 log.go:172] (0xc000118bb0) (0xc000792140) Create stream\nI0104 14:25:43.399927     391 log.go:172] (0xc000118bb0) (0xc000792140) Stream added, broadcasting: 5\nI0104 14:25:43.404488     391 log.go:172] (0xc000118bb0) Reply frame received for 5\nI0104 14:25:43.580421     391 log.go:172] (0xc000118bb0) Data frame received for 5\nI0104 14:25:43.580486     391 log.go:172] (0xc000792140) (5) Data frame handling\nI0104 14:25:43.580506     391 log.go:172] (0xc000792140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 14:25:43.633181     391 log.go:172] (0xc000118bb0) Data frame received for 3\nI0104 14:25:43.633379     391 log.go:172] (0xc00027e000) (3) Data frame handling\nI0104 14:25:43.633429     391 log.go:172] (0xc00027e000) (3) Data frame sent\nI0104 14:25:43.876871     391 log.go:172] (0xc000118bb0) (0xc00027e000) Stream removed, broadcasting: 3\nI0104 14:25:43.877557     391 log.go:172] (0xc000118bb0) Data frame received for 1\nI0104 14:25:43.877788     391 log.go:172] (0xc000118bb0) (0xc000792140) Stream removed, broadcasting: 5\nI0104 14:25:43.877898     391 log.go:172] (0xc000792000) (1) Data frame handling\nI0104 14:25:43.877942     391 log.go:172] (0xc000792000) (1) Data frame sent\nI0104 14:25:43.877961     391 log.go:172] (0xc000118bb0) (0xc000792000) Stream removed, broadcasting: 1\nI0104 14:25:43.878000     391 log.go:172] 
(0xc000118bb0) Go away received\nI0104 14:25:43.879241     391 log.go:172] (0xc000118bb0) (0xc000792000) Stream removed, broadcasting: 1\nI0104 14:25:43.879258     391 log.go:172] (0xc000118bb0) (0xc00027e000) Stream removed, broadcasting: 3\nI0104 14:25:43.879266     391 log.go:172] (0xc000118bb0) (0xc000792140) Stream removed, broadcasting: 5\n"
Jan  4 14:25:43.894: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 14:25:43.894: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 14:25:43.903: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  4 14:25:53.924: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 14:25:53.925: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 14:25:53.959: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  4 14:25:53.959: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  }]
Jan  4 14:25:53.959: INFO: 
Jan  4 14:25:53.959: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  4 14:25:54.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990512186s
Jan  4 14:25:56.168: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979991995s
Jan  4 14:25:57.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.781329986s
Jan  4 14:25:58.807: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.163953684s
Jan  4 14:25:59.993: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.14197869s
Jan  4 14:26:01.739: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.95624498s
Jan  4 14:26:02.897: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.210134207s
Jan  4 14:26:04.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 51.603287ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3938
Jan  4 14:26:05.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 14:26:06.152: INFO: stderr: "I0104 14:26:05.671863     411 log.go:172] (0xc0009be420) (0xc000a0c640) Create stream\nI0104 14:26:05.672328     411 log.go:172] (0xc0009be420) (0xc000a0c640) Stream added, broadcasting: 1\nI0104 14:26:05.692421     411 log.go:172] (0xc0009be420) Reply frame received for 1\nI0104 14:26:05.692483     411 log.go:172] (0xc0009be420) (0xc000870000) Create stream\nI0104 14:26:05.692504     411 log.go:172] (0xc0009be420) (0xc000870000) Stream added, broadcasting: 3\nI0104 14:26:05.694140     411 log.go:172] (0xc0009be420) Reply frame received for 3\nI0104 14:26:05.694164     411 log.go:172] (0xc0009be420) (0xc000a0c6e0) Create stream\nI0104 14:26:05.694177     411 log.go:172] (0xc0009be420) (0xc000a0c6e0) Stream added, broadcasting: 5\nI0104 14:26:05.695878     411 log.go:172] (0xc0009be420) Reply frame received for 5\nI0104 14:26:05.885817     411 log.go:172] (0xc0009be420) Data frame received for 3\nI0104 14:26:05.886752     411 log.go:172] (0xc0009be420) Data frame received for 5\nI0104 14:26:05.886930     411 log.go:172] (0xc000a0c6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 14:26:05.887037     411 log.go:172] (0xc000870000) (3) Data frame handling\nI0104 14:26:05.887072     411 log.go:172] (0xc000870000) (3) Data frame sent\nI0104 14:26:05.887160     411 log.go:172] (0xc000a0c6e0) (5) Data frame sent\nI0104 14:26:06.130299     411 log.go:172] (0xc0009be420) Data frame received for 1\nI0104 14:26:06.130623     411 log.go:172] (0xc0009be420) (0xc000870000) Stream removed, broadcasting: 3\nI0104 14:26:06.130799     411 log.go:172] (0xc000a0c640) (1) Data frame handling\nI0104 14:26:06.130836     411 log.go:172] (0xc000a0c640) (1) Data frame sent\nI0104 14:26:06.130854     411 log.go:172] (0xc0009be420) (0xc000a0c640) Stream removed, broadcasting: 1\nI0104 14:26:06.131314     411 log.go:172] (0xc0009be420) (0xc000a0c6e0) Stream removed, broadcasting: 5\nI0104 14:26:06.132137     411 log.go:172] 
(0xc0009be420) Go away received\nI0104 14:26:06.132326     411 log.go:172] (0xc0009be420) (0xc000a0c640) Stream removed, broadcasting: 1\nI0104 14:26:06.132388     411 log.go:172] (0xc0009be420) (0xc000870000) Stream removed, broadcasting: 3\nI0104 14:26:06.132420     411 log.go:172] (0xc0009be420) (0xc000a0c6e0) Stream removed, broadcasting: 5\n"
Jan  4 14:26:06.153: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 14:26:06.153: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 14:26:06.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 14:26:07.002: INFO: stderr: "I0104 14:26:06.402009     431 log.go:172] (0xc00073a420) (0xc0007b80a0) Create stream\nI0104 14:26:06.402975     431 log.go:172] (0xc00073a420) (0xc0007b80a0) Stream added, broadcasting: 1\nI0104 14:26:06.414960     431 log.go:172] (0xc00073a420) Reply frame received for 1\nI0104 14:26:06.415058     431 log.go:172] (0xc00073a420) (0xc00003b9a0) Create stream\nI0104 14:26:06.415074     431 log.go:172] (0xc00073a420) (0xc00003b9a0) Stream added, broadcasting: 3\nI0104 14:26:06.417378     431 log.go:172] (0xc00073a420) Reply frame received for 3\nI0104 14:26:06.417435     431 log.go:172] (0xc00073a420) (0xc0002a2000) Create stream\nI0104 14:26:06.417450     431 log.go:172] (0xc00073a420) (0xc0002a2000) Stream added, broadcasting: 5\nI0104 14:26:06.420281     431 log.go:172] (0xc00073a420) Reply frame received for 5\nI0104 14:26:06.861621     431 log.go:172] (0xc00073a420) Data frame received for 5\nI0104 14:26:06.861679     431 log.go:172] (0xc0002a2000) (5) Data frame handling\nI0104 14:26:06.861707     431 log.go:172] (0xc0002a2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 14:26:06.935724     431 log.go:172] (0xc00073a420) Data frame received for 3\nI0104 14:26:06.935746     431 log.go:172] (0xc00003b9a0) (3) Data frame handling\nI0104 14:26:06.935752     431 log.go:172] (0xc00003b9a0) (3) Data frame sent\nI0104 14:26:06.935774     431 log.go:172] (0xc00073a420) Data frame received for 5\nI0104 14:26:06.935779     431 log.go:172] (0xc0002a2000) (5) Data frame handling\nI0104 14:26:06.935783     431 log.go:172] (0xc0002a2000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0104 14:26:06.998653     431 log.go:172] (0xc00073a420) Data frame received for 1\nI0104 14:26:06.998675     431 log.go:172] (0xc0007b80a0) (1) Data frame handling\nI0104 14:26:06.998692     431 log.go:172] (0xc0007b80a0) (1) Data frame sent\nI0104 14:26:06.998705     431 log.go:172] 
(0xc00073a420) (0xc0007b80a0) Stream removed, broadcasting: 1\nI0104 14:26:06.998754     431 log.go:172] (0xc00073a420) (0xc0002a2000) Stream removed, broadcasting: 5\nI0104 14:26:06.998811     431 log.go:172] (0xc00073a420) (0xc00003b9a0) Stream removed, broadcasting: 3\nI0104 14:26:06.998834     431 log.go:172] (0xc00073a420) Go away received\nI0104 14:26:06.999072     431 log.go:172] (0xc00073a420) (0xc0007b80a0) Stream removed, broadcasting: 1\nI0104 14:26:06.999088     431 log.go:172] (0xc00073a420) (0xc00003b9a0) Stream removed, broadcasting: 3\nI0104 14:26:06.999096     431 log.go:172] (0xc00073a420) (0xc0002a2000) Stream removed, broadcasting: 5\n"
Jan  4 14:26:07.002: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 14:26:07.002: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 14:26:07.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 14:26:07.579: INFO: stderr: "I0104 14:26:07.171712     444 log.go:172] (0xc000104dc0) (0xc00096c640) Create stream\nI0104 14:26:07.172114     444 log.go:172] (0xc000104dc0) (0xc00096c640) Stream added, broadcasting: 1\nI0104 14:26:07.178196     444 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0104 14:26:07.178240     444 log.go:172] (0xc000104dc0) (0xc000a38000) Create stream\nI0104 14:26:07.178264     444 log.go:172] (0xc000104dc0) (0xc000a38000) Stream added, broadcasting: 3\nI0104 14:26:07.179538     444 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0104 14:26:07.179571     444 log.go:172] (0xc000104dc0) (0xc00096c6e0) Create stream\nI0104 14:26:07.179582     444 log.go:172] (0xc000104dc0) (0xc00096c6e0) Stream added, broadcasting: 5\nI0104 14:26:07.181058     444 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0104 14:26:07.295340     444 log.go:172] (0xc000104dc0) Data frame received for 3\nI0104 14:26:07.295477     444 log.go:172] (0xc000a38000) (3) Data frame handling\nI0104 14:26:07.295517     444 log.go:172] (0xc000a38000) (3) Data frame sent\nI0104 14:26:07.295676     444 log.go:172] (0xc000104dc0) Data frame received for 5\nI0104 14:26:07.295702     444 log.go:172] (0xc00096c6e0) (5) Data frame handling\nI0104 14:26:07.295747     444 log.go:172] (0xc00096c6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0104 14:26:07.561673     444 log.go:172] (0xc000104dc0) (0xc000a38000) Stream removed, broadcasting: 3\nI0104 14:26:07.561834     444 log.go:172] (0xc000104dc0) Data frame received for 1\nI0104 14:26:07.561880     444 log.go:172] (0xc000104dc0) (0xc00096c6e0) Stream removed, broadcasting: 5\nI0104 14:26:07.561935     444 log.go:172] (0xc00096c640) (1) Data frame handling\nI0104 14:26:07.561955     444 log.go:172] (0xc00096c640) (1) Data frame sent\nI0104 14:26:07.561967     444 log.go:172] (0xc000104dc0) (0xc00096c640) 
Stream removed, broadcasting: 1\nI0104 14:26:07.561995     444 log.go:172] (0xc000104dc0) Go away received\nI0104 14:26:07.563569     444 log.go:172] (0xc000104dc0) (0xc00096c640) Stream removed, broadcasting: 1\nI0104 14:26:07.563679     444 log.go:172] (0xc000104dc0) (0xc000a38000) Stream removed, broadcasting: 3\nI0104 14:26:07.563688     444 log.go:172] (0xc000104dc0) (0xc00096c6e0) Stream removed, broadcasting: 5\n"
Jan  4 14:26:07.580: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 14:26:07.580: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 14:26:07.592: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 14:26:07.592: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 14:26:07.592: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  4 14:26:07.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 14:26:08.144: INFO: stderr: "I0104 14:26:07.756571     464 log.go:172] (0xc000a66580) (0xc000688aa0) Create stream\nI0104 14:26:07.756884     464 log.go:172] (0xc000a66580) (0xc000688aa0) Stream added, broadcasting: 1\nI0104 14:26:07.769183     464 log.go:172] (0xc000a66580) Reply frame received for 1\nI0104 14:26:07.769229     464 log.go:172] (0xc000a66580) (0xc0009dc000) Create stream\nI0104 14:26:07.769248     464 log.go:172] (0xc000a66580) (0xc0009dc000) Stream added, broadcasting: 3\nI0104 14:26:07.770435     464 log.go:172] (0xc000a66580) Reply frame received for 3\nI0104 14:26:07.770460     464 log.go:172] (0xc000a66580) (0xc0009dc0a0) Create stream\nI0104 14:26:07.770473     464 log.go:172] (0xc000a66580) (0xc0009dc0a0) Stream added, broadcasting: 5\nI0104 14:26:07.772146     464 log.go:172] (0xc000a66580) Reply frame received for 5\nI0104 14:26:07.891304     464 log.go:172] (0xc000a66580) Data frame received for 3\nI0104 14:26:07.891445     464 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0104 14:26:07.891478     464 log.go:172] (0xc0009dc000) (3) Data frame sent\nI0104 14:26:07.891598     464 log.go:172] (0xc000a66580) Data frame received for 5\nI0104 14:26:07.891614     464 log.go:172] (0xc0009dc0a0) (5) Data frame handling\nI0104 14:26:07.891637     464 log.go:172] (0xc0009dc0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 14:26:08.134623     464 log.go:172] (0xc000a66580) (0xc0009dc000) Stream removed, broadcasting: 3\nI0104 14:26:08.134964     464 log.go:172] (0xc000a66580) Data frame received for 1\nI0104 14:26:08.135049     464 log.go:172] (0xc000688aa0) (1) Data frame handling\nI0104 14:26:08.135095     464 log.go:172] (0xc000688aa0) (1) Data frame sent\nI0104 14:26:08.135148     464 log.go:172] (0xc000a66580) (0xc000688aa0) Stream removed, broadcasting: 1\nI0104 14:26:08.135608     464 log.go:172] (0xc000a66580) (0xc0009dc0a0) Stream removed, broadcasting: 5\nI0104 14:26:08.137161     464 log.go:172] 
(0xc000a66580) Go away received\nI0104 14:26:08.137494     464 log.go:172] (0xc000a66580) (0xc000688aa0) Stream removed, broadcasting: 1\nI0104 14:26:08.137833     464 log.go:172] (0xc000a66580) (0xc0009dc000) Stream removed, broadcasting: 3\nI0104 14:26:08.137924     464 log.go:172] (0xc000a66580) (0xc0009dc0a0) Stream removed, broadcasting: 5\n"
Jan  4 14:26:08.145: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 14:26:08.145: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 14:26:08.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 14:26:08.620: INFO: stderr: "I0104 14:26:08.279103     484 log.go:172] (0xc0009c4420) (0xc00040e820) Create stream\nI0104 14:26:08.279210     484 log.go:172] (0xc0009c4420) (0xc00040e820) Stream added, broadcasting: 1\nI0104 14:26:08.283907     484 log.go:172] (0xc0009c4420) Reply frame received for 1\nI0104 14:26:08.284030     484 log.go:172] (0xc0009c4420) (0xc000828000) Create stream\nI0104 14:26:08.284071     484 log.go:172] (0xc0009c4420) (0xc000828000) Stream added, broadcasting: 3\nI0104 14:26:08.289750     484 log.go:172] (0xc0009c4420) Reply frame received for 3\nI0104 14:26:08.289876     484 log.go:172] (0xc0009c4420) (0xc00066c320) Create stream\nI0104 14:26:08.289909     484 log.go:172] (0xc0009c4420) (0xc00066c320) Stream added, broadcasting: 5\nI0104 14:26:08.292196     484 log.go:172] (0xc0009c4420) Reply frame received for 5\nI0104 14:26:08.384634     484 log.go:172] (0xc0009c4420) Data frame received for 5\nI0104 14:26:08.384707     484 log.go:172] (0xc00066c320) (5) Data frame handling\nI0104 14:26:08.384755     484 log.go:172] (0xc00066c320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 14:26:08.481202     484 log.go:172] (0xc0009c4420) Data frame received for 3\nI0104 14:26:08.481264     484 log.go:172] (0xc000828000) (3) Data frame handling\nI0104 14:26:08.481296     484 log.go:172] (0xc000828000) (3) Data frame sent\nI0104 14:26:08.605631     484 log.go:172] (0xc0009c4420) (0xc000828000) Stream removed, broadcasting: 3\nI0104 14:26:08.605799     484 log.go:172] (0xc0009c4420) Data frame received for 1\nI0104 14:26:08.605944     484 log.go:172] (0xc0009c4420) (0xc00066c320) Stream removed, broadcasting: 5\nI0104 14:26:08.606021     484 log.go:172] (0xc00040e820) (1) Data frame handling\nI0104 14:26:08.606075     484 log.go:172] (0xc00040e820) (1) Data frame sent\nI0104 14:26:08.606117     484 log.go:172] (0xc0009c4420) (0xc00040e820) Stream removed, broadcasting: 1\nI0104 14:26:08.606159     484 log.go:172] 
(0xc0009c4420) Go away received\nI0104 14:26:08.607087     484 log.go:172] (0xc0009c4420) (0xc00040e820) Stream removed, broadcasting: 1\nI0104 14:26:08.607147     484 log.go:172] (0xc0009c4420) (0xc000828000) Stream removed, broadcasting: 3\nI0104 14:26:08.607158     484 log.go:172] (0xc0009c4420) (0xc00066c320) Stream removed, broadcasting: 5\n"
Jan  4 14:26:08.620: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 14:26:08.620: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 14:26:08.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 14:26:09.094: INFO: stderr: "I0104 14:26:08.768087     505 log.go:172] (0xc000a78420) (0xc0009f2780) Create stream\nI0104 14:26:08.768240     505 log.go:172] (0xc000a78420) (0xc0009f2780) Stream added, broadcasting: 1\nI0104 14:26:08.772800     505 log.go:172] (0xc000a78420) Reply frame received for 1\nI0104 14:26:08.772830     505 log.go:172] (0xc000a78420) (0xc000948000) Create stream\nI0104 14:26:08.772838     505 log.go:172] (0xc000a78420) (0xc000948000) Stream added, broadcasting: 3\nI0104 14:26:08.773849     505 log.go:172] (0xc000a78420) Reply frame received for 3\nI0104 14:26:08.773866     505 log.go:172] (0xc000a78420) (0xc0009480a0) Create stream\nI0104 14:26:08.773870     505 log.go:172] (0xc000a78420) (0xc0009480a0) Stream added, broadcasting: 5\nI0104 14:26:08.775106     505 log.go:172] (0xc000a78420) Reply frame received for 5\nI0104 14:26:08.870893     505 log.go:172] (0xc000a78420) Data frame received for 5\nI0104 14:26:08.870957     505 log.go:172] (0xc0009480a0) (5) Data frame handling\nI0104 14:26:08.870986     505 log.go:172] (0xc0009480a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 14:26:08.906349     505 log.go:172] (0xc000a78420) Data frame received for 3\nI0104 14:26:08.906370     505 log.go:172] (0xc000948000) (3) Data frame handling\nI0104 14:26:08.906379     505 log.go:172] (0xc000948000) (3) Data frame sent\nI0104 14:26:09.084505     505 log.go:172] (0xc000a78420) (0xc000948000) Stream removed, broadcasting: 3\nI0104 14:26:09.084610     505 log.go:172] (0xc000a78420) Data frame received for 1\nI0104 14:26:09.084686     505 log.go:172] (0xc000a78420) (0xc0009480a0) Stream removed, broadcasting: 5\nI0104 14:26:09.084778     505 log.go:172] (0xc0009f2780) (1) Data frame handling\nI0104 14:26:09.084803     505 log.go:172] (0xc0009f2780) (1) Data frame sent\nI0104 14:26:09.084813     505 log.go:172] (0xc000a78420) (0xc0009f2780) Stream removed, broadcasting: 1\nI0104 14:26:09.084824     505 log.go:172] 
(0xc000a78420) Go away received\nI0104 14:26:09.085241     505 log.go:172] (0xc000a78420) (0xc0009f2780) Stream removed, broadcasting: 1\nI0104 14:26:09.085258     505 log.go:172] (0xc000a78420) (0xc000948000) Stream removed, broadcasting: 3\nI0104 14:26:09.085273     505 log.go:172] (0xc000a78420) (0xc0009480a0) Stream removed, broadcasting: 5\n"
Jan  4 14:26:09.095: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 14:26:09.095: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 14:26:09.095: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 14:26:09.101: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  4 14:26:19.146: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 14:26:19.147: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 14:26:19.147: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 14:26:19.197: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  4 14:26:19.197: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  }]
Jan  4 14:26:19.197: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:53 +0000 UTC  }]
Jan  4 14:26:19.197: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:19.198: INFO: 
Jan  4 14:26:19.198: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 14:26:20.960: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  4 14:26:20.960: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  }]
Jan  4 14:26:20.960: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:53 +0000 UTC  }]
Jan  4 14:26:20.960: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:20.960: INFO: 
Jan  4 14:26:20.960: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 14:26:21.978: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  4 14:26:21.978: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  }]
Jan  4 14:26:21.978: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:53 +0000 UTC  }]
Jan  4 14:26:21.978: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:21.978: INFO: 
Jan  4 14:26:21.978: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 14:26:22.989: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  4 14:26:22.989: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  }]
Jan  4 14:26:22.989: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:53 +0000 UTC  }]
Jan  4 14:26:22.989: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:22.989: INFO: 
Jan  4 14:26:22.989: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 14:26:24.007: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  4 14:26:24.008: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  }]
Jan  4 14:26:24.008: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:53 +0000 UTC  }]
Jan  4 14:26:24.008: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:24.008: INFO: 
Jan  4 14:26:24.008: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 14:26:25.046: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  4 14:26:25.046: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  }]
Jan  4 14:26:25.047: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:53 +0000 UTC  }]
Jan  4 14:26:25.047: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:25.047: INFO: 
Jan  4 14:26:25.047: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 14:26:26.055: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  4 14:26:26.056: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  }]
Jan  4 14:26:26.056: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:53 +0000 UTC  }]
Jan  4 14:26:26.056: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:26.056: INFO: 
Jan  4 14:26:26.056: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 14:26:27.082: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  4 14:26:27.082: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:23 +0000 UTC  }]
Jan  4 14:26:27.082: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:53 +0000 UTC  }]
Jan  4 14:26:27.082: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:27.082: INFO: 
Jan  4 14:26:27.082: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  4 14:26:28.094: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  4 14:26:28.094: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:28.094: INFO: 
Jan  4 14:26:28.094: INFO: StatefulSet ss has not reached scale 0, at 1
Jan  4 14:26:29.102: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  4 14:26:29.102: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:26:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 14:25:54 +0000 UTC  }]
Jan  4 14:26:29.102: INFO: 
Jan  4 14:26:29.102: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3938
Jan  4 14:26:30.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 14:26:30.352: INFO: rc: 1
Jan  4 14:26:30.353: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002bf5620 exit status 1   true [0xc0005734c8 0xc000573530 0xc000573588] [0xc0005734c8 0xc000573530 0xc000573588] [0xc000573518 0xc000573580] [0xba6c50 0xba6c50] 0xc00260b260 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Jan  4 14:26:40.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 14:26:40.504: INFO: rc: 1
Jan  4 14:26:40.504: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002bf56e0 exit status 1   true [0xc000573598 0xc000573630 0xc000573690] [0xc000573598 0xc000573630 0xc000573690] [0xc0005735b8 0xc000573670] [0xba6c50 0xba6c50] 0xc00260b7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  4 14:26:50.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 14:26:50.621: INFO: rc: 1
Jan  4 14:26:50.622: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002bf57d0 exit status 1   true [0xc0005736b0 0xc000573740 0xc000573828] [0xc0005736b0 0xc000573740 0xc000573828] [0xc000573700 0xc0005737d8] [0xba6c50 0xba6c50] 0xc00260bb00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[... identical RunHostCmd retry output repeated every 10s from 14:27:00 to 14:31:24; each attempt returned rc: 1 with stderr 'Error from server (NotFound): pods "ss-2" not found' ...]
Jan  4 14:31:34.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3938 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 14:31:34.851: INFO: rc: 1
Jan  4 14:31:34.852: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan  4 14:31:34.852: INFO: Scaling statefulset ss to 0
Jan  4 14:31:34.870: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  4 14:31:34.874: INFO: Deleting all statefulset in ns statefulset-3938
Jan  4 14:31:34.877: INFO: Scaling statefulset ss to 0
Jan  4 14:31:34.892: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 14:31:34.895: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:31:34.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3938" for this suite.
Jan  4 14:31:40.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:31:41.121: INFO: namespace statefulset-3938 deletion completed in 6.156038694s

• [SLOW TEST:378.276 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:31:41.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 14:31:41.173: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan  4 14:31:43.336: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:31:43.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8136" for this suite.
Jan  4 14:31:56.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:31:56.247: INFO: namespace replication-controller-8136 deletion completed in 12.232871547s

• [SLOW TEST:15.125 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:31:56.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  4 14:31:56.432: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:32:14.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3727" for this suite.
Jan  4 14:32:21.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:32:21.173: INFO: namespace init-container-3727 deletion completed in 6.196480752s

• [SLOW TEST:24.926 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:32:21.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  4 14:32:21.256: INFO: Waiting up to 5m0s for pod "pod-b1f9cf56-ddc1-4331-9814-764f42b97736" in namespace "emptydir-5469" to be "success or failure"
Jan  4 14:32:21.261: INFO: Pod "pod-b1f9cf56-ddc1-4331-9814-764f42b97736": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281216ms
Jan  4 14:32:23.270: INFO: Pod "pod-b1f9cf56-ddc1-4331-9814-764f42b97736": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013398713s
Jan  4 14:32:25.283: INFO: Pod "pod-b1f9cf56-ddc1-4331-9814-764f42b97736": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026104605s
Jan  4 14:32:27.289: INFO: Pod "pod-b1f9cf56-ddc1-4331-9814-764f42b97736": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032546732s
Jan  4 14:32:29.295: INFO: Pod "pod-b1f9cf56-ddc1-4331-9814-764f42b97736": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038483657s
Jan  4 14:32:31.304: INFO: Pod "pod-b1f9cf56-ddc1-4331-9814-764f42b97736": Phase="Pending", Reason="", readiness=false. Elapsed: 10.047260041s
Jan  4 14:32:33.316: INFO: Pod "pod-b1f9cf56-ddc1-4331-9814-764f42b97736": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.059250181s
STEP: Saw pod success
Jan  4 14:32:33.316: INFO: Pod "pod-b1f9cf56-ddc1-4331-9814-764f42b97736" satisfied condition "success or failure"
Jan  4 14:32:33.321: INFO: Trying to get logs from node iruya-node pod pod-b1f9cf56-ddc1-4331-9814-764f42b97736 container test-container: 
STEP: delete the pod
Jan  4 14:32:33.450: INFO: Waiting for pod pod-b1f9cf56-ddc1-4331-9814-764f42b97736 to disappear
Jan  4 14:32:33.464: INFO: Pod pod-b1f9cf56-ddc1-4331-9814-764f42b97736 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:32:33.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5469" for this suite.
Jan  4 14:32:39.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:32:39.631: INFO: namespace emptydir-5469 deletion completed in 6.160142736s

• [SLOW TEST:18.458 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:32:39.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 14:33:00.136: INFO: Waiting up to 5m0s for pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2" in namespace "pods-4880" to be "success or failure"
Jan  4 14:33:00.160: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.33893ms
Jan  4 14:33:02.174: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03693728s
Jan  4 14:33:04.185: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048277031s
Jan  4 14:33:06.200: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063129213s
Jan  4 14:33:08.215: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078419342s
Jan  4 14:33:10.228: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.090842634s
Jan  4 14:33:12.262: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125266481s
Jan  4 14:33:14.277: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2": Phase="Running", Reason="", readiness=true. Elapsed: 14.140587517s
Jan  4 14:33:16.285: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.147947962s
STEP: Saw pod success
Jan  4 14:33:16.285: INFO: Pod "client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2" satisfied condition "success or failure"
Jan  4 14:33:16.288: INFO: Trying to get logs from node iruya-node pod client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2 container env3cont: 
STEP: delete the pod
Jan  4 14:33:16.399: INFO: Waiting for pod client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2 to disappear
Jan  4 14:33:16.491: INFO: Pod client-envvars-0a497b14-e964-4f76-966b-acff5265e5d2 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:33:16.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4880" for this suite.
Jan  4 14:34:08.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:34:08.820: INFO: namespace pods-4880 deletion completed in 52.317356699s

• [SLOW TEST:89.188 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:34:08.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e797c7b3-5b7f-4417-ac20-88afb1d1f012
STEP: Creating a pod to test consume configMaps
Jan  4 14:34:09.134: INFO: Waiting up to 5m0s for pod "pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb" in namespace "configmap-1682" to be "success or failure"
Jan  4 14:34:09.154: INFO: Pod "pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.628081ms
Jan  4 14:34:11.161: INFO: Pod "pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027302726s
Jan  4 14:34:13.171: INFO: Pod "pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037371212s
Jan  4 14:34:15.185: INFO: Pod "pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051480721s
Jan  4 14:34:17.201: INFO: Pod "pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067132503s
Jan  4 14:34:19.209: INFO: Pod "pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074566204s
Jan  4 14:34:21.219: INFO: Pod "pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.085162741s
STEP: Saw pod success
Jan  4 14:34:21.219: INFO: Pod "pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb" satisfied condition "success or failure"
Jan  4 14:34:21.224: INFO: Trying to get logs from node iruya-node pod pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb container configmap-volume-test: 
STEP: delete the pod
Jan  4 14:34:21.325: INFO: Waiting for pod pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb to disappear
Jan  4 14:34:21.411: INFO: Pod pod-configmaps-18d39529-613d-4ba5-ab9c-81c65d1a96cb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:34:21.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1682" for this suite.
Jan  4 14:34:27.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:34:27.587: INFO: namespace configmap-1682 deletion completed in 6.170313147s

• [SLOW TEST:18.766 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:34:27.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  4 14:34:27.658: INFO: Waiting up to 5m0s for pod "pod-444a9c9c-8c1d-44d6-906e-1e353c542d91" in namespace "emptydir-7670" to be "success or failure"
Jan  4 14:34:27.667: INFO: Pod "pod-444a9c9c-8c1d-44d6-906e-1e353c542d91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16849ms
Jan  4 14:34:29.680: INFO: Pod "pod-444a9c9c-8c1d-44d6-906e-1e353c542d91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021851996s
Jan  4 14:34:31.698: INFO: Pod "pod-444a9c9c-8c1d-44d6-906e-1e353c542d91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039241796s
Jan  4 14:34:33.708: INFO: Pod "pod-444a9c9c-8c1d-44d6-906e-1e353c542d91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049755869s
Jan  4 14:34:35.719: INFO: Pod "pod-444a9c9c-8c1d-44d6-906e-1e353c542d91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060916652s
Jan  4 14:34:37.736: INFO: Pod "pod-444a9c9c-8c1d-44d6-906e-1e353c542d91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077503995s
STEP: Saw pod success
Jan  4 14:34:37.736: INFO: Pod "pod-444a9c9c-8c1d-44d6-906e-1e353c542d91" satisfied condition "success or failure"
Jan  4 14:34:37.742: INFO: Trying to get logs from node iruya-node pod pod-444a9c9c-8c1d-44d6-906e-1e353c542d91 container test-container: 
STEP: delete the pod
Jan  4 14:34:37.883: INFO: Waiting for pod pod-444a9c9c-8c1d-44d6-906e-1e353c542d91 to disappear
Jan  4 14:34:37.904: INFO: Pod pod-444a9c9c-8c1d-44d6-906e-1e353c542d91 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:34:37.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7670" for this suite.
Jan  4 14:34:43.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:34:44.059: INFO: namespace emptydir-7670 deletion completed in 6.145762738s

• [SLOW TEST:16.471 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:34:44.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  4 14:34:44.178: INFO: Waiting up to 5m0s for pod "pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376" in namespace "emptydir-1057" to be "success or failure"
Jan  4 14:34:44.197: INFO: Pod "pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376": Phase="Pending", Reason="", readiness=false. Elapsed: 18.327056ms
Jan  4 14:34:46.206: INFO: Pod "pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027271025s
Jan  4 14:34:48.217: INFO: Pod "pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038194821s
Jan  4 14:34:50.226: INFO: Pod "pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047745227s
Jan  4 14:34:52.254: INFO: Pod "pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075127955s
Jan  4 14:34:54.265: INFO: Pod "pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085930976s
STEP: Saw pod success
Jan  4 14:34:54.265: INFO: Pod "pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376" satisfied condition "success or failure"
Jan  4 14:34:54.270: INFO: Trying to get logs from node iruya-node pod pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376 container test-container: 
STEP: delete the pod
Jan  4 14:34:54.344: INFO: Waiting for pod pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376 to disappear
Jan  4 14:34:54.365: INFO: Pod pod-0d5c4a9f-5b3e-409e-b4ad-c550b811c376 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:34:54.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1057" for this suite.
Jan  4 14:35:00.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:35:00.759: INFO: namespace emptydir-1057 deletion completed in 6.367859786s

• [SLOW TEST:16.700 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:35:00.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-b02b36e0-57a1-4bd8-9664-f77d448073db
STEP: Creating secret with name s-test-opt-upd-6302a2d9-a054-4f83-a987-b8d85cb7010b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b02b36e0-57a1-4bd8-9664-f77d448073db
STEP: Updating secret s-test-opt-upd-6302a2d9-a054-4f83-a987-b8d85cb7010b
STEP: Creating secret with name s-test-opt-create-698ec9ab-669b-45a5-b72b-0f96541fe9d9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:35:17.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5810" for this suite.
Jan  4 14:35:41.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:35:41.775: INFO: namespace projected-5810 deletion completed in 24.155175715s

• [SLOW TEST:41.015 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:35:41.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 14:35:41.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b" in namespace "downward-api-1870" to be "success or failure"
Jan  4 14:35:41.965: INFO: Pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.22568ms
Jan  4 14:35:43.972: INFO: Pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016771387s
Jan  4 14:35:45.983: INFO: Pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027654463s
Jan  4 14:35:47.994: INFO: Pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038681834s
Jan  4 14:35:50.001: INFO: Pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045630322s
Jan  4 14:35:52.014: INFO: Pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058586251s
Jan  4 14:35:54.025: INFO: Pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.069367657s
Jan  4 14:35:56.034: INFO: Pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.078260081s
STEP: Saw pod success
Jan  4 14:35:56.034: INFO: Pod "downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b" satisfied condition "success or failure"
Jan  4 14:35:56.039: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b container client-container: 
STEP: delete the pod
Jan  4 14:35:56.176: INFO: Waiting for pod downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b to disappear
Jan  4 14:35:56.185: INFO: Pod downwardapi-volume-01c43984-fbc0-49f5-a0ae-d24ee75b970b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:35:56.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1870" for this suite.
Jan  4 14:36:02.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:36:02.310: INFO: namespace downward-api-1870 deletion completed in 6.119397065s

• [SLOW TEST:20.535 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:36:02.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-c87de685-dcfe-4640-8836-3a0293f44d45
STEP: Creating configMap with name cm-test-opt-upd-41595dd8-307e-491b-959c-b4d7c3c913d2
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-c87de685-dcfe-4640-8836-3a0293f44d45
STEP: Updating configmap cm-test-opt-upd-41595dd8-307e-491b-959c-b4d7c3c913d2
STEP: Creating configMap with name cm-test-opt-create-4c1f4a6e-c07e-4fd2-85fb-bd14de4d901e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:37:32.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4909" for this suite.
Jan  4 14:38:12.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:38:12.738: INFO: namespace projected-4909 deletion completed in 40.134304371s

• [SLOW TEST:130.428 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:38:12.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-17144a54-e714-44cf-97ac-543b11529d45
STEP: Creating a pod to test consume secrets
Jan  4 14:38:12.844: INFO: Waiting up to 5m0s for pod "pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9" in namespace "secrets-8489" to be "success or failure"
Jan  4 14:38:12.854: INFO: Pod "pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.256704ms
Jan  4 14:38:14.890: INFO: Pod "pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045704038s
Jan  4 14:38:17.219: INFO: Pod "pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374082721s
Jan  4 14:38:19.226: INFO: Pod "pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38169442s
Jan  4 14:38:21.242: INFO: Pod "pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.39769237s
Jan  4 14:38:23.251: INFO: Pod "pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.406432913s
STEP: Saw pod success
Jan  4 14:38:23.251: INFO: Pod "pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9" satisfied condition "success or failure"
Jan  4 14:38:23.257: INFO: Trying to get logs from node iruya-node pod pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9 container secret-volume-test: 
STEP: delete the pod
Jan  4 14:38:23.384: INFO: Waiting for pod pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9 to disappear
Jan  4 14:38:23.392: INFO: Pod pod-secrets-a4d1128c-50e6-4f68-a405-6c79415e3ee9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:38:23.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8489" for this suite.
Jan  4 14:38:29.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:38:29.643: INFO: namespace secrets-8489 deletion completed in 6.242885342s

• [SLOW TEST:16.904 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:38:29.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-800
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  4 14:38:29.734: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  4 14:39:04.042: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-800 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 14:39:04.042: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:39:04.163945       8 log.go:172] (0xc00126e8f0) (0xc0018ab680) Create stream
I0104 14:39:04.164122       8 log.go:172] (0xc00126e8f0) (0xc0018ab680) Stream added, broadcasting: 1
I0104 14:39:04.172163       8 log.go:172] (0xc00126e8f0) Reply frame received for 1
I0104 14:39:04.172190       8 log.go:172] (0xc00126e8f0) (0xc0003d7180) Create stream
I0104 14:39:04.172196       8 log.go:172] (0xc00126e8f0) (0xc0003d7180) Stream added, broadcasting: 3
I0104 14:39:04.173863       8 log.go:172] (0xc00126e8f0) Reply frame received for 3
I0104 14:39:04.173885       8 log.go:172] (0xc00126e8f0) (0xc000101900) Create stream
I0104 14:39:04.173892       8 log.go:172] (0xc00126e8f0) (0xc000101900) Stream added, broadcasting: 5
I0104 14:39:04.181344       8 log.go:172] (0xc00126e8f0) Reply frame received for 5
I0104 14:39:04.455089       8 log.go:172] (0xc00126e8f0) Data frame received for 3
I0104 14:39:04.455202       8 log.go:172] (0xc0003d7180) (3) Data frame handling
I0104 14:39:04.455221       8 log.go:172] (0xc0003d7180) (3) Data frame sent
I0104 14:39:04.761044       8 log.go:172] (0xc00126e8f0) (0xc0003d7180) Stream removed, broadcasting: 3
I0104 14:39:04.761157       8 log.go:172] (0xc00126e8f0) Data frame received for 1
I0104 14:39:04.761196       8 log.go:172] (0xc00126e8f0) (0xc000101900) Stream removed, broadcasting: 5
I0104 14:39:04.761243       8 log.go:172] (0xc0018ab680) (1) Data frame handling
I0104 14:39:04.761264       8 log.go:172] (0xc0018ab680) (1) Data frame sent
I0104 14:39:04.761277       8 log.go:172] (0xc00126e8f0) (0xc0018ab680) Stream removed, broadcasting: 1
I0104 14:39:04.761331       8 log.go:172] (0xc00126e8f0) Go away received
I0104 14:39:04.761617       8 log.go:172] (0xc00126e8f0) (0xc0018ab680) Stream removed, broadcasting: 1
I0104 14:39:04.761640       8 log.go:172] (0xc00126e8f0) (0xc0003d7180) Stream removed, broadcasting: 3
I0104 14:39:04.761654       8 log.go:172] (0xc00126e8f0) (0xc000101900) Stream removed, broadcasting: 5
Jan  4 14:39:04.761: INFO: Waiting for endpoints: map[]
Jan  4 14:39:04.774: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-800 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 14:39:04.774: INFO: >>> kubeConfig: /root/.kube/config
I0104 14:39:04.900210       8 log.go:172] (0xc000c61340) (0xc0003d7540) Create stream
I0104 14:39:04.900414       8 log.go:172] (0xc000c61340) (0xc0003d7540) Stream added, broadcasting: 1
I0104 14:39:04.909468       8 log.go:172] (0xc000c61340) Reply frame received for 1
I0104 14:39:04.909536       8 log.go:172] (0xc000c61340) (0xc001f823c0) Create stream
I0104 14:39:04.909545       8 log.go:172] (0xc000c61340) (0xc001f823c0) Stream added, broadcasting: 3
I0104 14:39:04.912581       8 log.go:172] (0xc000c61340) Reply frame received for 3
I0104 14:39:04.912602       8 log.go:172] (0xc000c61340) (0xc0003d7720) Create stream
I0104 14:39:04.912608       8 log.go:172] (0xc000c61340) (0xc0003d7720) Stream added, broadcasting: 5
I0104 14:39:04.914238       8 log.go:172] (0xc000c61340) Reply frame received for 5
I0104 14:39:05.050987       8 log.go:172] (0xc000c61340) Data frame received for 3
I0104 14:39:05.051055       8 log.go:172] (0xc001f823c0) (3) Data frame handling
I0104 14:39:05.051093       8 log.go:172] (0xc001f823c0) (3) Data frame sent
I0104 14:39:05.292632       8 log.go:172] (0xc000c61340) Data frame received for 1
I0104 14:39:05.292800       8 log.go:172] (0xc000c61340) (0xc001f823c0) Stream removed, broadcasting: 3
I0104 14:39:05.292865       8 log.go:172] (0xc0003d7540) (1) Data frame handling
I0104 14:39:05.292884       8 log.go:172] (0xc0003d7540) (1) Data frame sent
I0104 14:39:05.292907       8 log.go:172] (0xc000c61340) (0xc0003d7720) Stream removed, broadcasting: 5
I0104 14:39:05.293007       8 log.go:172] (0xc000c61340) (0xc0003d7540) Stream removed, broadcasting: 1
I0104 14:39:05.293157       8 log.go:172] (0xc000c61340) Go away received
I0104 14:39:05.293361       8 log.go:172] (0xc000c61340) (0xc0003d7540) Stream removed, broadcasting: 1
I0104 14:39:05.293387       8 log.go:172] (0xc000c61340) (0xc001f823c0) Stream removed, broadcasting: 3
I0104 14:39:05.293404       8 log.go:172] (0xc000c61340) (0xc0003d7720) Stream removed, broadcasting: 5
Jan  4 14:39:05.293: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:39:05.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-800" for this suite.
Jan  4 14:39:31.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:39:31.502: INFO: namespace pod-network-test-800 deletion completed in 26.196233983s

• [SLOW TEST:61.859 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:39:31.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan  4 14:39:31.615: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan  4 14:39:32.707: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan  4 14:39:35.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:39:37.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:39:39.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:39:41.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:39:43.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:39:45.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:39:47.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:39:49.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713745572, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 14:39:54.557: INFO: Waited 3.439784485s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:39:54.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5715" for this suite.
Jan  4 14:40:01.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:40:01.245: INFO: namespace aggregator-5715 deletion completed in 6.25818891s

• [SLOW TEST:29.743 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
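Registering a sample API server, as the Aggregator test above does, hinges on creating an APIService object that points the aggregator at the Service fronting the deployment. A hedged sketch of such a manifest (the group/version and Service name follow the upstream sample-apiserver convention and are assumptions, not values captured from this run):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io     # illustrative group/version
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api               # Service fronting sample-apiserver-deployment
    namespace: aggregator-5715
  insecureSkipTLSVerify: true      # the e2e test provisions a real caBundle instead
```

Once this object exists, the aggregator proxies requests for the registered group/version to the backing pod, which is why the test first waits for the deployment to become available.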
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:40:01.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  4 14:40:01.397: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  4 14:40:01.410: INFO: Waiting for terminating namespaces to be deleted...
Jan  4 14:40:01.412: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  4 14:40:01.423: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan  4 14:40:01.424: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 14:40:01.424: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  4 14:40:01.424: INFO: 	Container weave ready: true, restart count 0
Jan  4 14:40:01.424: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 14:40:01.424: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  4 14:40:01.435: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan  4 14:40:01.436: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  4 14:40:01.436: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan  4 14:40:01.436: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  4 14:40:01.436: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  4 14:40:01.436: INFO: 	Container coredns ready: true, restart count 0
Jan  4 14:40:01.436: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan  4 14:40:01.436: INFO: 	Container etcd ready: true, restart count 0
Jan  4 14:40:01.436: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  4 14:40:01.436: INFO: 	Container weave ready: true, restart count 0
Jan  4 14:40:01.436: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 14:40:01.436: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  4 14:40:01.436: INFO: 	Container coredns ready: true, restart count 0
Jan  4 14:40:01.436: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan  4 14:40:01.436: INFO: 	Container kube-controller-manager ready: true, restart count 17
Jan  4 14:40:01.436: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan  4 14:40:01.436: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan  4 14:40:01.605: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  4 14:40:01.605: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  4 14:40:01.605: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  4 14:40:01.605: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan  4 14:40:01.605: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan  4 14:40:01.605: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  4 14:40:01.605: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan  4 14:40:01.605: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  4 14:40:01.605: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan  4 14:40:01.605: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3711bf63-c362-4020-b121-881e7aa28d52.15e6b5f94e49ace8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2965/filler-pod-3711bf63-c362-4020-b121-881e7aa28d52 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3711bf63-c362-4020-b121-881e7aa28d52.15e6b5fa95c64d25], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3711bf63-c362-4020-b121-881e7aa28d52.15e6b5fba32933a0], Reason = [Created], Message = [Created container filler-pod-3711bf63-c362-4020-b121-881e7aa28d52]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3711bf63-c362-4020-b121-881e7aa28d52.15e6b5fbc8a170d5], Reason = [Started], Message = [Started container filler-pod-3711bf63-c362-4020-b121-881e7aa28d52]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c92f400c-7198-4f91-9233-c05a1382f6bf.15e6b5f94cc01793], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2965/filler-pod-c92f400c-7198-4f91-9233-c05a1382f6bf to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c92f400c-7198-4f91-9233-c05a1382f6bf.15e6b5fa9b862afb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c92f400c-7198-4f91-9233-c05a1382f6bf.15e6b5fbd82ac543], Reason = [Created], Message = [Created container filler-pod-c92f400c-7198-4f91-9233-c05a1382f6bf]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c92f400c-7198-4f91-9233-c05a1382f6bf.15e6b5fc00e84dd0], Reason = [Started], Message = [Started container filler-pod-c92f400c-7198-4f91-9233-c05a1382f6bf]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e6b5fc92d0ca8a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:40:16.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2965" for this suite.
Jan  4 14:40:23.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:40:23.265: INFO: namespace sched-pred-2965 deletion completed in 6.280921509s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:22.019 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
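The FailedScheduling event above ("0/2 nodes are available: 2 Insufficient cpu.") comes from the scheduler's resource-fit predicate: a pod fits a node only if its CPU request plus the requests already on the node stay within the node's allocatable CPU. A minimal sketch with illustrative millicore numbers (the function name and the 2000m allocatable figure are assumptions, not values from this cluster):

```python
def fits_cpu(allocatable_m, requested_m, pod_request_m):
    """Resource-fit predicate: the pod fits iff existing requests plus
    its own request do not exceed allocatable CPU (millicores)."""
    return requested_m + pod_request_m <= allocatable_m

# Requests logged for iruya-server-sfge57q7djm7 before the test:
# 100 + 100 + 0 + 250 + 200 + 0 + 100 + 20 millicores.
existing = 100 + 100 + 0 + 250 + 200 + 0 + 100 + 20  # 770m

# With an assumed 2000m allocatable, a filler pod sized to consume most
# of the remainder leaves too little room for one more sizable pod.
allocatable = 2000
filler = allocatable - existing - 100  # leave ~100m headroom
print(fits_cpu(allocatable, existing + filler, 600))  # False: Insufficient cpu
```

This mirrors the test flow above: filler pods absorb most free CPU on both nodes, so the additional pod fails the predicate on every node and stays Pending.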
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:40:23.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 14:40:25.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7522'
Jan  4 14:40:29.191: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 14:40:29.191: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  4 14:40:29.243: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-szl6p]
Jan  4 14:40:29.244: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-szl6p" in namespace "kubectl-7522" to be "running and ready"
Jan  4 14:40:29.247: INFO: Pod "e2e-test-nginx-rc-szl6p": Phase="Pending", Reason="", readiness=false. Elapsed: 3.489252ms
Jan  4 14:40:31.262: INFO: Pod "e2e-test-nginx-rc-szl6p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018044565s
Jan  4 14:40:33.273: INFO: Pod "e2e-test-nginx-rc-szl6p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028913699s
Jan  4 14:40:35.281: INFO: Pod "e2e-test-nginx-rc-szl6p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036908739s
Jan  4 14:40:37.293: INFO: Pod "e2e-test-nginx-rc-szl6p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049076684s
Jan  4 14:40:39.298: INFO: Pod "e2e-test-nginx-rc-szl6p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054240431s
Jan  4 14:40:41.305: INFO: Pod "e2e-test-nginx-rc-szl6p": Phase="Running", Reason="", readiness=true. Elapsed: 12.0614759s
Jan  4 14:40:41.305: INFO: Pod "e2e-test-nginx-rc-szl6p" satisfied condition "running and ready"
Jan  4 14:40:41.305: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-szl6p]
Jan  4 14:40:41.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7522'
Jan  4 14:40:41.537: INFO: stderr: ""
Jan  4 14:40:41.537: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan  4 14:40:41.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7522'
Jan  4 14:40:41.701: INFO: stderr: ""
Jan  4 14:40:41.701: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:40:41.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7522" for this suite.
Jan  4 14:41:03.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:41:03.957: INFO: namespace kubectl-7522 deletion completed in 22.25191088s

• [SLOW TEST:40.692 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
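`kubectl run --generator=run/v1` (deprecated, per the stderr captured above) expands to a ReplicationController roughly like the sketch below; `kubectl create -f` on an equivalent manifest is the suggested replacement. The `run: <name>` label convention matches kubectl's behavior but is an assumption here, not captured from this run:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  namespace: kubectl-7522
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc       # assumed kubectl-generated label
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

The RC's selector is what lets the test find the controlled pod (`e2e-test-nginx-rc-szl6p` above) and fetch logs with `kubectl logs rc/e2e-test-nginx-rc`.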
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:41:03.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-f9g9v in namespace proxy-4684
I0104 14:41:04.349679       8 runners.go:180] Created replication controller with name: proxy-service-f9g9v, namespace: proxy-4684, replica count: 1
I0104 14:41:05.400800       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:06.401226       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:07.401590       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:08.402026       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:09.402462       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:10.402987       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:11.403374       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:12.403715       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:13.404035       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:14.404398       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 14:41:15.405035       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 14:41:16.405734       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 14:41:17.406135       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 14:41:18.406511       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 14:41:19.407029       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 14:41:20.407489       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 14:41:21.407846       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0104 14:41:22.408180       8 runners.go:180] proxy-service-f9g9v Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  4 14:41:22.474: INFO: setup took 18.354608433s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  4 14:41:22.521: INFO: (0) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 46.107344ms)
Jan  4 14:41:22.521: INFO: (0) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 46.101428ms)
Jan  4 14:41:22.521: INFO: (0) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 45.983237ms)
Jan  4 14:41:22.521: INFO: (0) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 46.316153ms)
Jan  4 14:41:22.521: INFO: (0) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 46.217073ms)
Jan  4 14:41:22.521: INFO: (0) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 46.139108ms)
Jan  4 14:41:22.522: INFO: (0) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 47.112647ms)
Jan  4 14:41:22.522: INFO: (0) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 47.088006ms)
Jan  4 14:41:22.522: INFO: (0) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 47.112113ms)
Jan  4 14:41:22.523: INFO: (0) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 48.621649ms)
Jan  4 14:41:22.523: INFO: (0) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 48.395693ms)
Jan  4 14:41:22.544: INFO: (0) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 25.869372ms)
Jan  4 14:41:22.574: INFO: (1) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 25.890376ms)
Jan  4 14:41:22.575: INFO: (1) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 26.88283ms)
Jan  4 14:41:22.585: INFO: (1) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test<... (200; 37.470949ms)
Jan  4 14:41:22.586: INFO: (1) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 37.632025ms)
Jan  4 14:41:22.586: INFO: (1) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 37.563562ms)
Jan  4 14:41:22.586: INFO: (1) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 38.300569ms)
Jan  4 14:41:22.586: INFO: (1) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 38.326257ms)
Jan  4 14:41:22.586: INFO: (1) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 38.575792ms)
Jan  4 14:41:22.587: INFO: (1) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 38.494518ms)
Jan  4 14:41:22.587: INFO: (1) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 39.010051ms)
Jan  4 14:41:22.587: INFO: (1) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 39.128837ms)
Jan  4 14:41:22.588: INFO: (1) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 39.62703ms)
Jan  4 14:41:22.612: INFO: (2) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 23.950875ms)
Jan  4 14:41:22.613: INFO: (2) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 24.47198ms)
Jan  4 14:41:22.615: INFO: (2) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 26.859816ms)
Jan  4 14:41:22.615: INFO: (2) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: ... (200; 27.047081ms)
Jan  4 14:41:22.615: INFO: (2) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 26.579545ms)
Jan  4 14:41:22.615: INFO: (2) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 26.599691ms)
Jan  4 14:41:22.615: INFO: (2) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 26.758483ms)
Jan  4 14:41:22.615: INFO: (2) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 26.696714ms)
Jan  4 14:41:22.615: INFO: (2) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 27.223008ms)
Jan  4 14:41:22.615: INFO: (2) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 27.138815ms)
Jan  4 14:41:22.615: INFO: (2) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 26.66729ms)
Jan  4 14:41:22.616: INFO: (2) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 27.383667ms)
Jan  4 14:41:22.616: INFO: (2) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 27.959698ms)
Jan  4 14:41:22.616: INFO: (2) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 27.95607ms)
Jan  4 14:41:22.616: INFO: (2) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 27.885341ms)
Jan  4 14:41:22.632: INFO: (3) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 13.455022ms)
Jan  4 14:41:22.632: INFO: (3) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 14.617667ms)
Jan  4 14:41:22.632: INFO: (3) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 15.205709ms)
Jan  4 14:41:22.634: INFO: (3) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test<... (200; 16.18677ms)
Jan  4 14:41:22.635: INFO: (3) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 17.288808ms)
Jan  4 14:41:22.635: INFO: (3) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 16.31574ms)
Jan  4 14:41:22.635: INFO: (3) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 17.127021ms)
Jan  4 14:41:22.635: INFO: (3) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 16.51186ms)
Jan  4 14:41:22.636: INFO: (3) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 17.916679ms)
Jan  4 14:41:22.637: INFO: (3) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 18.466731ms)
Jan  4 14:41:22.640: INFO: (3) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 21.598702ms)
Jan  4 14:41:22.640: INFO: (3) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 21.492637ms)
Jan  4 14:41:22.640: INFO: (3) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 21.966744ms)
Jan  4 14:41:22.648: INFO: (4) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 7.951402ms)
Jan  4 14:41:22.648: INFO: (4) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test<... (200; 10.707995ms)
Jan  4 14:41:22.651: INFO: (4) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 10.676853ms)
Jan  4 14:41:22.651: INFO: (4) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 10.669453ms)
Jan  4 14:41:22.651: INFO: (4) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 11.062791ms)
Jan  4 14:41:22.652: INFO: (4) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 11.315088ms)
Jan  4 14:41:22.652: INFO: (4) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 11.478378ms)
Jan  4 14:41:22.655: INFO: (4) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 14.691496ms)
Jan  4 14:41:22.655: INFO: (4) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 14.951348ms)
Jan  4 14:41:22.656: INFO: (4) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 15.448263ms)
Jan  4 14:41:22.656: INFO: (4) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 15.889249ms)
Jan  4 14:41:22.656: INFO: (4) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 15.918499ms)
Jan  4 14:41:22.657: INFO: (4) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 16.509371ms)
Jan  4 14:41:22.658: INFO: (4) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 17.142282ms)
Jan  4 14:41:22.663: INFO: (5) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 5.058559ms)
Jan  4 14:41:22.670: INFO: (5) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 11.594853ms)
Jan  4 14:41:22.670: INFO: (5) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 11.727465ms)
Jan  4 14:41:22.672: INFO: (5) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 13.530592ms)
Jan  4 14:41:22.672: INFO: (5) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 13.396337ms)
Jan  4 14:41:22.672: INFO: (5) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 14.63088ms)
Jan  4 14:41:22.674: INFO: (5) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 16.430889ms)
Jan  4 14:41:22.675: INFO: (5) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 17.16399ms)
Jan  4 14:41:22.677: INFO: (5) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 18.329875ms)
Jan  4 14:41:22.677: INFO: (5) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 18.20874ms)
Jan  4 14:41:22.679: INFO: (5) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 19.936372ms)
Jan  4 14:41:22.679: INFO: (5) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 20.907475ms)
Jan  4 14:41:22.679: INFO: (5) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 21.162171ms)
Jan  4 14:41:22.680: INFO: (5) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 21.682491ms)
Jan  4 14:41:22.680: INFO: (5) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 21.871587ms)
Jan  4 14:41:22.691: INFO: (6) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 14.762155ms)
Jan  4 14:41:22.696: INFO: (6) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 14.991943ms)
Jan  4 14:41:22.696: INFO: (6) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 15.024869ms)
Jan  4 14:41:22.698: INFO: (6) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 17.24962ms)
Jan  4 14:41:22.699: INFO: (6) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 18.426223ms)
Jan  4 14:41:22.700: INFO: (6) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 19.290223ms)
Jan  4 14:41:22.716: INFO: (7) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 15.362432ms)
Jan  4 14:41:22.716: INFO: (7) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 15.41657ms)
Jan  4 14:41:22.716: INFO: (7) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 15.400515ms)
Jan  4 14:41:22.716: INFO: (7) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 15.571672ms)
Jan  4 14:41:22.716: INFO: (7) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 15.398331ms)
Jan  4 14:41:22.718: INFO: (7) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 17.098831ms)
Jan  4 14:41:22.721: INFO: (7) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 19.748276ms)
Jan  4 14:41:22.721: INFO: (7) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 20.377826ms)
Jan  4 14:41:22.722: INFO: (7) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 21.315695ms)
Jan  4 14:41:22.722: INFO: (7) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 21.311066ms)
Jan  4 14:41:22.723: INFO: (7) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 22.665042ms)
Jan  4 14:41:22.723: INFO: (7) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 22.357095ms)
Jan  4 14:41:22.724: INFO: (7) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 22.868763ms)
Jan  4 14:41:22.724: INFO: (7) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 23.614927ms)
Jan  4 14:41:22.725: INFO: (7) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 24.451985ms)
Jan  4 14:41:22.745: INFO: (8) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 18.860238ms)
Jan  4 14:41:22.750: INFO: (8) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 23.825671ms)
Jan  4 14:41:22.752: INFO: (8) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test<... (200; 26.798694ms)
Jan  4 14:41:22.755: INFO: (8) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 29.245065ms)
Jan  4 14:41:22.755: INFO: (8) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 29.485723ms)
Jan  4 14:41:22.755: INFO: (8) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 29.604901ms)
Jan  4 14:41:22.756: INFO: (8) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 30.439134ms)
Jan  4 14:41:22.756: INFO: (8) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 30.461001ms)
Jan  4 14:41:22.757: INFO: (8) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 31.218593ms)
Jan  4 14:41:22.757: INFO: (8) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 31.199962ms)
Jan  4 14:41:22.757: INFO: (8) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 31.418312ms)
Jan  4 14:41:22.758: INFO: (8) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 32.283774ms)
Jan  4 14:41:22.758: INFO: (8) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 31.994301ms)
Jan  4 14:41:22.758: INFO: (8) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 32.097305ms)
Jan  4 14:41:22.803: INFO: (9) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 44.615434ms)
Jan  4 14:41:22.805: INFO: (9) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: ... (200; 47.004534ms)
Jan  4 14:41:22.806: INFO: (9) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 47.218801ms)
Jan  4 14:41:22.806: INFO: (9) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 47.053967ms)
Jan  4 14:41:22.806: INFO: (9) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 47.32008ms)
Jan  4 14:41:22.806: INFO: (9) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 47.345864ms)
Jan  4 14:41:22.806: INFO: (9) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 47.030927ms)
Jan  4 14:41:22.806: INFO: (9) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 47.351194ms)
Jan  4 14:41:22.806: INFO: (9) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 47.123161ms)
Jan  4 14:41:22.806: INFO: (9) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 47.096339ms)
Jan  4 14:41:22.807: INFO: (9) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 47.921071ms)
Jan  4 14:41:22.810: INFO: (9) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 51.110222ms)
Jan  4 14:41:22.832: INFO: (10) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 21.297087ms)
Jan  4 14:41:22.832: INFO: (10) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 21.651697ms)
Jan  4 14:41:22.832: INFO: (10) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 21.796904ms)
Jan  4 14:41:22.832: INFO: (10) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 22.295864ms)
Jan  4 14:41:22.832: INFO: (10) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 21.57459ms)
Jan  4 14:41:22.834: INFO: (10) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 24.315342ms)
Jan  4 14:41:22.834: INFO: (10) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 24.064879ms)
Jan  4 14:41:22.835: INFO: (10) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 24.241853ms)
Jan  4 14:41:22.835: INFO: (10) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 24.095743ms)
Jan  4 14:41:22.835: INFO: (10) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 23.824144ms)
Jan  4 14:41:22.835: INFO: (10) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 24.69264ms)
Jan  4 14:41:22.835: INFO: (10) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 24.198127ms)
Jan  4 14:41:22.835: INFO: (10) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 24.693152ms)
Jan  4 14:41:22.835: INFO: (10) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test<... (200; 16.512422ms)
Jan  4 14:41:22.854: INFO: (11) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 17.813349ms)
Jan  4 14:41:22.860: INFO: (11) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 23.518217ms)
Jan  4 14:41:22.860: INFO: (11) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 23.620451ms)
Jan  4 14:41:22.861: INFO: (11) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 23.973011ms)
Jan  4 14:41:22.862: INFO: (11) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 24.744154ms)
Jan  4 14:41:22.863: INFO: (11) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 26.061547ms)
Jan  4 14:41:22.864: INFO: (11) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 26.466283ms)
Jan  4 14:41:22.864: INFO: (11) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 26.633103ms)
Jan  4 14:41:22.865: INFO: (11) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 28.863652ms)
Jan  4 14:41:22.867: INFO: (11) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 31.364996ms)
Jan  4 14:41:22.867: INFO: (11) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 32.163395ms)
Jan  4 14:41:22.892: INFO: (12) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 24.224084ms)
Jan  4 14:41:22.893: INFO: (12) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 25.148444ms)
Jan  4 14:41:22.894: INFO: (12) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 26.241619ms)
Jan  4 14:41:22.894: INFO: (12) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 26.235661ms)
Jan  4 14:41:22.894: INFO: (12) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 26.173672ms)
Jan  4 14:41:22.894: INFO: (12) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 25.932935ms)
Jan  4 14:41:22.894: INFO: (12) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 27.925012ms)
Jan  4 14:41:22.896: INFO: (12) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 27.771148ms)
Jan  4 14:41:22.896: INFO: (12) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 27.959291ms)
Jan  4 14:41:22.900: INFO: (12) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 31.993392ms)
Jan  4 14:41:22.900: INFO: (12) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 31.758881ms)
Jan  4 14:41:22.900: INFO: (12) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 32.014838ms)
Jan  4 14:41:22.900: INFO: (12) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 32.495746ms)
Jan  4 14:41:22.906: INFO: (12) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 37.619209ms)
Jan  4 14:41:22.916: INFO: (13) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 9.744881ms)
Jan  4 14:41:22.916: INFO: (13) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 9.857418ms)
Jan  4 14:41:22.917: INFO: (13) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 10.70088ms)
Jan  4 14:41:22.917: INFO: (13) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 10.947309ms)
Jan  4 14:41:22.917: INFO: (13) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 11.419549ms)
Jan  4 14:41:22.918: INFO: (13) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 11.512326ms)
Jan  4 14:41:22.918: INFO: (13) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 11.866839ms)
Jan  4 14:41:22.918: INFO: (13) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 11.686368ms)
Jan  4 14:41:22.918: INFO: (13) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 11.980086ms)
Jan  4 14:41:22.918: INFO: (13) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 12.033728ms)
Jan  4 14:41:22.918: INFO: (13) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 12.31813ms)
Jan  4 14:41:22.918: INFO: (13) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 12.159378ms)
Jan  4 14:41:22.918: INFO: (13) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 12.116237ms)
Jan  4 14:41:22.920: INFO: (13) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 13.881702ms)
Jan  4 14:41:22.920: INFO: (13) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 14.604862ms)
Jan  4 14:41:22.924: INFO: (14) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test<... (200; 5.254047ms)
Jan  4 14:41:22.926: INFO: (14) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 5.214304ms)
Jan  4 14:41:22.928: INFO: (14) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 7.385166ms)
Jan  4 14:41:22.928: INFO: (14) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 7.592058ms)
Jan  4 14:41:22.928: INFO: (14) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 7.795801ms)
Jan  4 14:41:22.929: INFO: (14) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 7.900664ms)
Jan  4 14:41:22.929: INFO: (14) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 8.03028ms)
Jan  4 14:41:22.929: INFO: (14) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 8.27224ms)
Jan  4 14:41:22.929: INFO: (14) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 8.268267ms)
Jan  4 14:41:22.929: INFO: (14) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 8.285344ms)
Jan  4 14:41:22.929: INFO: (14) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 8.266242ms)
Jan  4 14:41:22.934: INFO: (14) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 13.428271ms)
Jan  4 14:41:22.934: INFO: (14) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 13.468264ms)
Jan  4 14:41:22.934: INFO: (14) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 13.632711ms)
Jan  4 14:41:22.934: INFO: (14) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 13.678186ms)
Jan  4 14:41:22.938: INFO: (15) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 3.565715ms)
Jan  4 14:41:22.938: INFO: (15) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 4.126909ms)
Jan  4 14:41:22.944: INFO: (15) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test<... (200; 10.980855ms)
Jan  4 14:41:22.946: INFO: (15) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 11.319858ms)
Jan  4 14:41:22.946: INFO: (15) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 11.631604ms)
Jan  4 14:41:22.946: INFO: (15) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 11.58524ms)
Jan  4 14:41:22.946: INFO: (15) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 11.751558ms)
Jan  4 14:41:22.947: INFO: (15) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 12.194767ms)
Jan  4 14:41:22.947: INFO: (15) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 12.103933ms)
Jan  4 14:41:22.947: INFO: (15) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 12.201595ms)
Jan  4 14:41:22.951: INFO: (15) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 16.099572ms)
Jan  4 14:41:22.962: INFO: (16) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 11.360059ms)
Jan  4 14:41:22.963: INFO: (16) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 11.893878ms)
Jan  4 14:41:22.964: INFO: (16) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 12.605753ms)
Jan  4 14:41:22.964: INFO: (16) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 12.759887ms)
Jan  4 14:41:22.964: INFO: (16) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 13.151031ms)
Jan  4 14:41:22.967: INFO: (16) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 15.334575ms)
Jan  4 14:41:22.967: INFO: (16) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 15.743108ms)
Jan  4 14:41:22.967: INFO: (16) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname1/proxy/: tls baz (200; 15.712125ms)
Jan  4 14:41:22.968: INFO: (16) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 16.822273ms)
Jan  4 14:41:22.971: INFO: (16) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 20.01597ms)
Jan  4 14:41:22.975: INFO: (16) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 23.29396ms)
Jan  4 14:41:22.975: INFO: (16) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 24.051183ms)
Jan  4 14:41:22.976: INFO: (16) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 25.148099ms)
Jan  4 14:41:22.978: INFO: (16) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 26.254488ms)
Jan  4 14:41:22.979: INFO: (16) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 27.599649ms)
Jan  4 14:41:22.992: INFO: (17) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 12.594215ms)
Jan  4 14:41:22.992: INFO: (17) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 12.567168ms)
Jan  4 14:41:22.995: INFO: (17) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 16.145884ms)
Jan  4 14:41:22.996: INFO: (17) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 16.63933ms)
Jan  4 14:41:22.996: INFO: (17) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 16.383937ms)
Jan  4 14:41:22.996: INFO: (17) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 16.650613ms)
Jan  4 14:41:22.997: INFO: (17) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 18.681538ms)
Jan  4 14:41:23.000: INFO: (17) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 20.405504ms)
Jan  4 14:41:23.000: INFO: (17) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 20.333072ms)
Jan  4 14:41:23.000: INFO: (17) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 20.286139ms)
Jan  4 14:41:23.005: INFO: (18) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: test (200; 7.499352ms)
Jan  4 14:41:23.007: INFO: (18) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/: tls qux (200; 7.621652ms)
Jan  4 14:41:23.011: INFO: (18) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 10.799605ms)
Jan  4 14:41:23.012: INFO: (18) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 11.706157ms)
Jan  4 14:41:23.013: INFO: (18) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname2/proxy/: bar (200; 12.659446ms)
Jan  4 14:41:23.013: INFO: (18) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname2/proxy/: bar (200; 12.672244ms)
Jan  4 14:41:23.014: INFO: (18) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 13.684756ms)
Jan  4 14:41:23.014: INFO: (18) /api/v1/namespaces/proxy-4684/services/proxy-service-f9g9v:portname1/proxy/: foo (200; 13.794748ms)
Jan  4 14:41:23.015: INFO: (18) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 14.66309ms)
Jan  4 14:41:23.015: INFO: (18) /api/v1/namespaces/proxy-4684/services/https:proxy-service-f9g9v:tlsportname2/proxy/: tls qux (200; 14.733713ms)
Jan  4 14:41:23.015: INFO: (18) /api/v1/namespaces/proxy-4684/services/http:proxy-service-f9g9v:portname1/proxy/: foo (200; 14.753851ms)
Jan  4 14:41:23.015: INFO: (18) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 14.798809ms)
Jan  4 14:41:23.019: INFO: (19) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:460/proxy/: tls baz (200; 3.925437ms)
Jan  4 14:41:23.022: INFO: (19) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 6.832018ms)
Jan  4 14:41:23.022: INFO: (19) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:1080/proxy/: test<... (200; 7.00801ms)
Jan  4 14:41:23.022: INFO: (19) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:1080/proxy/: ... (200; 7.11095ms)
Jan  4 14:41:23.022: INFO: (19) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:160/proxy/: foo (200; 7.21339ms)
Jan  4 14:41:23.023: INFO: (19) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 8.014553ms)
Jan  4 14:41:23.023: INFO: (19) /api/v1/namespaces/proxy-4684/pods/proxy-service-f9g9v-xdjh4/proxy/: test (200; 8.516097ms)
Jan  4 14:41:23.024: INFO: (19) /api/v1/namespaces/proxy-4684/pods/http:proxy-service-f9g9v-xdjh4:162/proxy/: bar (200; 8.840051ms)
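Every request in the rounds above targets the apiserver's proxy subresource, whose path encodes an optional scheme prefix and port on the pod or service name. A minimal sketch of how such a path is assembled (hypothetical helper, not the e2e framework's actual code):

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy subresource path.

    kind is "pods" or "services"; scheme ("http"/"https") and port are
    optional and appear as prefix/suffix on the target name, e.g.
    "https:proxy-service-f9g9v-xdjh4:462".
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

# Matches the pod URL form seen in the log lines above:
# /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:462/proxy/
path = proxy_path("proxy-4684", "pods", "proxy-service-f9g9v-xdjh4", 462, "https")
```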
Jan  4 14:41:23.024: INFO: (19) /api/v1/namespaces/proxy-4684/pods/https:proxy-service-f9g9v-xdjh4:443/proxy/: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 14:41:43.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4922'
Jan  4 14:41:43.706: INFO: stderr: ""
Jan  4 14:41:43.706: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan  4 14:41:43.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4922'
Jan  4 14:41:45.483: INFO: stderr: ""
Jan  4 14:41:45.483: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  4 14:41:46.505: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:46.505: INFO: Found 0 / 1
Jan  4 14:41:47.572: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:47.572: INFO: Found 0 / 1
Jan  4 14:41:48.503: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:48.503: INFO: Found 0 / 1
Jan  4 14:41:49.490: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:49.490: INFO: Found 0 / 1
Jan  4 14:41:50.491: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:50.491: INFO: Found 0 / 1
Jan  4 14:41:51.492: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:51.492: INFO: Found 0 / 1
Jan  4 14:41:52.493: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:52.494: INFO: Found 0 / 1
Jan  4 14:41:53.493: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:53.493: INFO: Found 0 / 1
Jan  4 14:41:54.495: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:54.495: INFO: Found 0 / 1
Jan  4 14:41:55.501: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:55.501: INFO: Found 0 / 1
Jan  4 14:41:56.501: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:56.502: INFO: Found 1 / 1
Jan  4 14:41:56.502: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  4 14:41:56.511: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:41:56.511: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
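The repeated "Found 0 / 1" lines above are produced by a poll-until-ready loop with a 5m0s timeout. A minimal sketch of that pattern (hypothetical helper, assuming a caller-supplied check function, not the framework's WaitFor implementation):

```python
import time

def wait_for(check, timeout=300.0, interval=1.0):
    """Poll check() every `interval` seconds until it returns True
    or `timeout` seconds elapse; check() runs at least once."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Simulated usage: the "pod" becomes ready on the third poll.
state = {"polls": 0}
def pod_ready():
    state["polls"] += 1
    return state["polls"] >= 3

ready = wait_for(pod_ready, timeout=5.0, interval=0.01)
```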
Jan  4 14:41:56.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-prfkf --namespace=kubectl-4922'
Jan  4 14:41:56.676: INFO: stderr: ""
Jan  4 14:41:56.677: INFO: stdout: "Name:           redis-master-prfkf\nNamespace:      kubectl-4922\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sat, 04 Jan 2020 14:41:44 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://c33611fd3c91e13ad6f4c96b2f5efdd302379a1c932b7de51772239b465a4a85\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 04 Jan 2020 14:41:54 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-585f2 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-585f2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-585f2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  13s   default-scheduler    Successfully assigned kubectl-4922/redis-master-prfkf to iruya-node\n  Normal  Pulled     6s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Jan  4 14:41:56.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4922'
Jan  4 14:41:56.810: INFO: stderr: ""
Jan  4 14:41:56.810: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-4922\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  13s   replication-controller  Created pod: redis-master-prfkf\n"
Jan  4 14:41:56.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4922'
Jan  4 14:41:57.008: INFO: stderr: ""
Jan  4 14:41:57.008: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-4922\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.105.67.254\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan  4 14:41:57.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan  4 14:41:57.117: INFO: stderr: ""
Jan  4 14:41:57.117: INFO: stdout: "Name:               iruya-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             <none>\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 04 Jan 2020 14:41:54 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 04 Jan 2020 14:41:54 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 04 Jan 2020 14:41:54 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 04 Jan 2020 14:41:54 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         153d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         84d\n  kubectl-4922               redis-master-prfkf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan  4 14:41:57.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4922'
Jan  4 14:41:57.222: INFO: stderr: ""
Jan  4 14:41:57.222: INFO: stdout: "Name:         kubectl-4922\nLabels:       e2e-framework=kubectl\n              e2e-run=c349832e-9863-4cd2-b91e-ce17260d342c\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:41:57.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4922" for this suite.
Jan  4 14:42:31.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:42:31.408: INFO: namespace kubectl-4922 deletion completed in 34.181447701s

• [SLOW TEST:48.558 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:42:31.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fba4849d-a486-43fc-b667-cc367e3c298a
STEP: Creating a pod to test consume secrets
Jan  4 14:42:31.618: INFO: Waiting up to 5m0s for pod "pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d" in namespace "secrets-2010" to be "success or failure"
Jan  4 14:42:31.628: INFO: Pod "pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.800233ms
Jan  4 14:42:34.041: INFO: Pod "pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422589824s
Jan  4 14:42:36.056: INFO: Pod "pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438060299s
Jan  4 14:42:38.068: INFO: Pod "pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449280705s
Jan  4 14:42:40.110: INFO: Pod "pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491288889s
Jan  4 14:42:42.117: INFO: Pod "pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.498801978s
STEP: Saw pod success
Jan  4 14:42:42.117: INFO: Pod "pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d" satisfied condition "success or failure"
Jan  4 14:42:42.141: INFO: Trying to get logs from node iruya-node pod pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d container secret-volume-test: 
STEP: delete the pod
Jan  4 14:42:42.207: INFO: Waiting for pod pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d to disappear
Jan  4 14:42:42.217: INFO: Pod pod-secrets-f5aec05f-710a-4d24-a8ac-408912bd410d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:42:42.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2010" for this suite.
Jan  4 14:42:48.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:42:48.449: INFO: namespace secrets-2010 deletion completed in 6.172282304s

• [SLOW TEST:17.040 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:42:48.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bde2c6f5-3084-4d76-a17b-4b651222e2ff
STEP: Creating a pod to test consume configMaps
Jan  4 14:42:48.578: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60" in namespace "configmap-9580" to be "success or failure"
Jan  4 14:42:48.591: INFO: Pod "pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60": Phase="Pending", Reason="", readiness=false. Elapsed: 12.576258ms
Jan  4 14:42:50.628: INFO: Pod "pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049932523s
Jan  4 14:42:52.637: INFO: Pod "pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058106101s
Jan  4 14:42:54.671: INFO: Pod "pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092800576s
Jan  4 14:42:56.687: INFO: Pod "pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108392706s
Jan  4 14:42:58.725: INFO: Pod "pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146252564s
STEP: Saw pod success
Jan  4 14:42:58.725: INFO: Pod "pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60" satisfied condition "success or failure"
Jan  4 14:42:58.729: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60 container configmap-volume-test: 
STEP: delete the pod
Jan  4 14:42:59.374: INFO: Waiting for pod pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60 to disappear
Jan  4 14:42:59.382: INFO: Pod pod-configmaps-9b6d7dfd-4c31-4915-aa5d-d4a83c838a60 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:42:59.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9580" for this suite.
Jan  4 14:43:05.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:43:05.553: INFO: namespace configmap-9580 deletion completed in 6.164214018s

• [SLOW TEST:17.103 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:43:05.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 14:43:05.696: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"4c84d166-c0e4-46ea-bc6b-c4d79c74549e", Controller:(*bool)(0xc0030f5492), BlockOwnerDeletion:(*bool)(0xc0030f5493)}}
Jan  4 14:43:05.713: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6b786ed7-23da-462b-9406-35b212b7f913", Controller:(*bool)(0xc0029ded2a), BlockOwnerDeletion:(*bool)(0xc0029ded2b)}}
Jan  4 14:43:05.734: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6eae3f05-65a1-42c2-881b-120ed4d6dbbf", Controller:(*bool)(0xc0030f564a), BlockOwnerDeletion:(*bool)(0xc0030f564b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:43:10.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7291" for this suite.
Jan  4 14:43:16.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:43:16.983: INFO: namespace gc-7291 deletion completed in 6.183341376s

• [SLOW TEST:11.429 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:43:16.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0104 14:43:47.237579       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 14:43:47.238: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:43:47.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2701" for this suite.
Jan  4 14:43:53.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:43:54.808: INFO: namespace gc-2701 deletion completed in 7.553252499s

• [SLOW TEST:37.825 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:43:54.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan  4 14:43:55.032: INFO: Waiting up to 5m0s for pod "client-containers-4b8ab080-127a-435b-917c-f8281f5beabc" in namespace "containers-4004" to be "success or failure"
Jan  4 14:43:55.129: INFO: Pod "client-containers-4b8ab080-127a-435b-917c-f8281f5beabc": Phase="Pending", Reason="", readiness=false. Elapsed: 97.202928ms
Jan  4 14:43:57.139: INFO: Pod "client-containers-4b8ab080-127a-435b-917c-f8281f5beabc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106464603s
Jan  4 14:43:59.157: INFO: Pod "client-containers-4b8ab080-127a-435b-917c-f8281f5beabc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125097814s
Jan  4 14:44:01.168: INFO: Pod "client-containers-4b8ab080-127a-435b-917c-f8281f5beabc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135800218s
Jan  4 14:44:03.179: INFO: Pod "client-containers-4b8ab080-127a-435b-917c-f8281f5beabc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14672979s
Jan  4 14:44:05.187: INFO: Pod "client-containers-4b8ab080-127a-435b-917c-f8281f5beabc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15530017s
STEP: Saw pod success
Jan  4 14:44:05.187: INFO: Pod "client-containers-4b8ab080-127a-435b-917c-f8281f5beabc" satisfied condition "success or failure"
Jan  4 14:44:05.190: INFO: Trying to get logs from node iruya-node pod client-containers-4b8ab080-127a-435b-917c-f8281f5beabc container test-container: 
STEP: delete the pod
Jan  4 14:44:05.246: INFO: Waiting for pod client-containers-4b8ab080-127a-435b-917c-f8281f5beabc to disappear
Jan  4 14:44:05.260: INFO: Pod client-containers-4b8ab080-127a-435b-917c-f8281f5beabc no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:44:05.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4004" for this suite.
Jan  4 14:44:11.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:44:11.448: INFO: namespace containers-4004 deletion completed in 6.183848892s

• [SLOW TEST:16.638 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:44:11.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 14:44:11.561: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1" in namespace "downward-api-9262" to be "success or failure"
Jan  4 14:44:11.580: INFO: Pod "downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.582062ms
Jan  4 14:44:13.591: INFO: Pod "downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029318272s
Jan  4 14:44:15.600: INFO: Pod "downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038296651s
Jan  4 14:44:17.916: INFO: Pod "downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354767247s
Jan  4 14:44:19.925: INFO: Pod "downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363641884s
Jan  4 14:44:21.943: INFO: Pod "downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.3814328s
STEP: Saw pod success
Jan  4 14:44:21.943: INFO: Pod "downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1" satisfied condition "success or failure"
Jan  4 14:44:21.948: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1 container client-container: 
STEP: delete the pod
Jan  4 14:44:22.123: INFO: Waiting for pod downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1 to disappear
Jan  4 14:44:22.129: INFO: Pod downwardapi-volume-bd570735-1c0b-4b60-9374-93e37e0ec3a1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:44:22.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9262" for this suite.
Jan  4 14:44:28.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:44:28.411: INFO: namespace downward-api-9262 deletion completed in 6.271988082s

• [SLOW TEST:16.962 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:44:28.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan  4 14:44:28.516: INFO: Waiting up to 5m0s for pod "client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c" in namespace "containers-2938" to be "success or failure"
Jan  4 14:44:28.523: INFO: Pod "client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.864288ms
Jan  4 14:44:30.534: INFO: Pod "client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017689169s
Jan  4 14:44:32.552: INFO: Pod "client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035966011s
Jan  4 14:44:34.562: INFO: Pod "client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046259203s
Jan  4 14:44:36.575: INFO: Pod "client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059235832s
STEP: Saw pod success
Jan  4 14:44:36.575: INFO: Pod "client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c" satisfied condition "success or failure"
Jan  4 14:44:36.590: INFO: Trying to get logs from node iruya-node pod client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c container test-container: 
STEP: delete the pod
Jan  4 14:44:36.722: INFO: Waiting for pod client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c to disappear
Jan  4 14:44:36.728: INFO: Pod client-containers-a8a9dc49-b65b-4713-8c1b-32295398882c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:44:36.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2938" for this suite.
Jan  4 14:44:42.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:44:42.929: INFO: namespace containers-2938 deletion completed in 6.196964385s

• [SLOW TEST:14.518 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:44:42.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  4 14:44:43.195: INFO: Waiting up to 5m0s for pod "downward-api-7c827e15-214a-4569-ac1e-855b0b897c53" in namespace "downward-api-533" to be "success or failure"
Jan  4 14:44:43.201: INFO: Pod "downward-api-7c827e15-214a-4569-ac1e-855b0b897c53": Phase="Pending", Reason="", readiness=false. Elapsed: 5.961426ms
Jan  4 14:44:45.211: INFO: Pod "downward-api-7c827e15-214a-4569-ac1e-855b0b897c53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016077438s
Jan  4 14:44:47.219: INFO: Pod "downward-api-7c827e15-214a-4569-ac1e-855b0b897c53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023906905s
Jan  4 14:44:49.451: INFO: Pod "downward-api-7c827e15-214a-4569-ac1e-855b0b897c53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256282158s
Jan  4 14:44:51.459: INFO: Pod "downward-api-7c827e15-214a-4569-ac1e-855b0b897c53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.264469005s
Jan  4 14:44:53.470: INFO: Pod "downward-api-7c827e15-214a-4569-ac1e-855b0b897c53": Phase="Pending", Reason="", readiness=false. Elapsed: 10.274723211s
Jan  4 14:44:55.477: INFO: Pod "downward-api-7c827e15-214a-4569-ac1e-855b0b897c53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.281829739s
STEP: Saw pod success
Jan  4 14:44:55.477: INFO: Pod "downward-api-7c827e15-214a-4569-ac1e-855b0b897c53" satisfied condition "success or failure"
Jan  4 14:44:55.482: INFO: Trying to get logs from node iruya-node pod downward-api-7c827e15-214a-4569-ac1e-855b0b897c53 container dapi-container: 
STEP: delete the pod
Jan  4 14:44:55.607: INFO: Waiting for pod downward-api-7c827e15-214a-4569-ac1e-855b0b897c53 to disappear
Jan  4 14:44:55.631: INFO: Pod downward-api-7c827e15-214a-4569-ac1e-855b0b897c53 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:44:55.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-533" for this suite.
Jan  4 14:45:01.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:45:01.930: INFO: namespace downward-api-533 deletion completed in 6.287349911s

• [SLOW TEST:19.001 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
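The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above show the framework polling the pod phase roughly every two seconds until it leaves Pending. A minimal sketch of that kind of poll-until-condition loop (generic names here, not the framework's actual API) might look like:

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns a truthy
    value or `timeout` seconds elapse -- the same shape as the log's
    'Waiting up to 5m0s ... Elapsed: ...' loop."""
    start = clock()
    while True:
        elapsed = clock() - start
        result = check()
        if result:
            return result, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        sleep(interval)

# Simulated pod that reports Pending a few times, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
phase, elapsed = wait_for_condition(
    lambda: (p := next(phases)) in ("Succeeded", "Failed") and p,
    interval=0.0,  # no real sleeping in this sketch
)
print(phase)  # → Succeeded
```

The terminal phases (Succeeded or Failed) are what the test's "success or failure" condition accepts; anything else keeps the loop polling.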
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:45:01.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-d79766e5-cd19-4fde-91fb-b654f5e9fa92
STEP: Creating a pod to test consume configMaps
Jan  4 14:45:02.083: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96" in namespace "projected-6066" to be "success or failure"
Jan  4 14:45:02.087: INFO: Pod "pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011896ms
Jan  4 14:45:04.094: INFO: Pod "pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01071541s
Jan  4 14:45:06.105: INFO: Pod "pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02223827s
Jan  4 14:45:08.113: INFO: Pod "pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029606742s
Jan  4 14:45:10.168: INFO: Pod "pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085233314s
Jan  4 14:45:12.185: INFO: Pod "pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96": Phase="Running", Reason="", readiness=true. Elapsed: 10.102349371s
Jan  4 14:45:14.192: INFO: Pod "pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.109081525s
STEP: Saw pod success
Jan  4 14:45:14.192: INFO: Pod "pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96" satisfied condition "success or failure"
Jan  4 14:45:14.197: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 14:45:14.296: INFO: Waiting for pod pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96 to disappear
Jan  4 14:45:14.325: INFO: Pod pod-projected-configmaps-146a9efe-5d6e-4386-823a-6dd3820f2e96 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:45:14.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6066" for this suite.
Jan  4 14:45:20.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:45:20.589: INFO: namespace projected-6066 deletion completed in 6.237067121s

• [SLOW TEST:18.659 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
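The "defaultMode set" spec above checks that files projected from a ConfigMap carry the mode the volume requests. The log does not show which mode the test sets, so the 0o400 below is an assumed value for illustration; the effect on a plain filesystem is roughly analogous to a chmod on each projected key:

```python
import os
import stat
import tempfile

# Assumed mode for illustration only -- the log above does not reveal
# the defaultMode the actual test uses.
DEFAULT_MODE = 0o400

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "projected-key")
    with open(path, "w") as f:
        f.write("configmap value")
    # Roughly what applying the volume's defaultMode to a file amounts to.
    os.chmod(path, DEFAULT_MODE)
    mode = stat.S_IMODE(os.stat(path).st_mode)

print(oct(mode))  # → 0o400
```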
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:45:20.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 14:45:20.765: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  4 14:45:20.775: INFO: Number of nodes with available pods: 0
Jan  4 14:45:20.775: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  4 14:45:20.882: INFO: Number of nodes with available pods: 0
Jan  4 14:45:20.882: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:21.895: INFO: Number of nodes with available pods: 0
Jan  4 14:45:21.895: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:22.892: INFO: Number of nodes with available pods: 0
Jan  4 14:45:22.892: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:23.898: INFO: Number of nodes with available pods: 0
Jan  4 14:45:23.898: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:24.902: INFO: Number of nodes with available pods: 0
Jan  4 14:45:24.902: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:25.895: INFO: Number of nodes with available pods: 0
Jan  4 14:45:25.895: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:26.890: INFO: Number of nodes with available pods: 0
Jan  4 14:45:26.890: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:27.908: INFO: Number of nodes with available pods: 0
Jan  4 14:45:27.909: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:28.892: INFO: Number of nodes with available pods: 1
Jan  4 14:45:28.892: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  4 14:45:28.938: INFO: Number of nodes with available pods: 1
Jan  4 14:45:28.938: INFO: Number of running nodes: 0, number of available pods: 1
Jan  4 14:45:29.949: INFO: Number of nodes with available pods: 0
Jan  4 14:45:29.949: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  4 14:45:29.970: INFO: Number of nodes with available pods: 0
Jan  4 14:45:29.970: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:30.978: INFO: Number of nodes with available pods: 0
Jan  4 14:45:30.978: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:31.977: INFO: Number of nodes with available pods: 0
Jan  4 14:45:31.977: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:32.976: INFO: Number of nodes with available pods: 0
Jan  4 14:45:32.976: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:33.980: INFO: Number of nodes with available pods: 0
Jan  4 14:45:33.980: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:34.993: INFO: Number of nodes with available pods: 0
Jan  4 14:45:34.993: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:35.977: INFO: Number of nodes with available pods: 0
Jan  4 14:45:35.977: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:36.979: INFO: Number of nodes with available pods: 0
Jan  4 14:45:36.979: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:37.979: INFO: Number of nodes with available pods: 0
Jan  4 14:45:37.979: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:38.977: INFO: Number of nodes with available pods: 0
Jan  4 14:45:38.977: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:39.995: INFO: Number of nodes with available pods: 0
Jan  4 14:45:39.995: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:40.983: INFO: Number of nodes with available pods: 0
Jan  4 14:45:40.983: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:41.982: INFO: Number of nodes with available pods: 0
Jan  4 14:45:41.982: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:42.978: INFO: Number of nodes with available pods: 0
Jan  4 14:45:42.979: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:43.982: INFO: Number of nodes with available pods: 0
Jan  4 14:45:43.982: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:44.981: INFO: Number of nodes with available pods: 0
Jan  4 14:45:44.981: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:45.979: INFO: Number of nodes with available pods: 0
Jan  4 14:45:45.979: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:46.981: INFO: Number of nodes with available pods: 0
Jan  4 14:45:46.981: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:47.986: INFO: Number of nodes with available pods: 0
Jan  4 14:45:47.986: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:48.981: INFO: Number of nodes with available pods: 0
Jan  4 14:45:48.981: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:49.988: INFO: Number of nodes with available pods: 0
Jan  4 14:45:49.988: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:50.981: INFO: Number of nodes with available pods: 0
Jan  4 14:45:50.982: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:52.100: INFO: Number of nodes with available pods: 0
Jan  4 14:45:52.100: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:52.978: INFO: Number of nodes with available pods: 0
Jan  4 14:45:52.978: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:53.981: INFO: Number of nodes with available pods: 0
Jan  4 14:45:53.981: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:54.980: INFO: Number of nodes with available pods: 0
Jan  4 14:45:54.980: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:55.980: INFO: Number of nodes with available pods: 0
Jan  4 14:45:55.980: INFO: Node iruya-node is running more than one daemon pod
Jan  4 14:45:56.977: INFO: Number of nodes with available pods: 1
Jan  4 14:45:56.977: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7384, will wait for the garbage collector to delete the pods
Jan  4 14:45:57.047: INFO: Deleting DaemonSet.extensions daemon-set took: 9.040864ms
Jan  4 14:45:57.347: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.383025ms
Jan  4 14:46:06.553: INFO: Number of nodes with available pods: 0
Jan  4 14:46:06.553: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 14:46:06.556: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7384/daemonsets","resourceVersion":"19280100"},"items":null}

Jan  4 14:46:06.559: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7384/pods","resourceVersion":"19280100"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:46:06.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7384" for this suite.
Jan  4 14:46:12.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:46:12.779: INFO: namespace daemonsets-7384 deletion completed in 6.148524516s

• [SLOW TEST:52.188 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
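The DaemonSet spec above drives scheduling by flipping a node label: no daemon pod runs until the node matches the DaemonSet's node selector, and the pod is removed once the selector stops matching. The matching rule itself (a sketch; the label key/value below are illustrative, not taken from the log) is just "every selector entry must be present on the node":

```python
def selector_matches(node_labels: dict, selector: dict) -> bool:
    """True when every key/value pair in the DaemonSet's nodeSelector
    is present on the node's labels."""
    return all(node_labels.get(k) == v for k, v in selector.items())

selector = {"color": "blue"}  # illustrative selector, not from the log
node = {"kubernetes.io/hostname": "iruya-node"}

print(selector_matches(node, selector))  # → False: label not set yet
node["color"] = "blue"
print(selector_matches(node, selector))  # → True: daemon pod launches
node["color"] = "green"
print(selector_matches(node, selector))  # → False: pod gets unscheduled
```

This mirrors the log's three phases: no pods initially, one available pod after the label turns blue, and zero again after the label turns green.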
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:46:12.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  4 14:46:33.953: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  4 14:46:34.030: INFO: Pod pod-with-poststart-http-hook still exists
Jan  4 14:46:36.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  4 14:46:36.042: INFO: Pod pod-with-poststart-http-hook still exists
Jan  4 14:46:38.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  4 14:46:38.043: INFO: Pod pod-with-poststart-http-hook still exists
Jan  4 14:46:40.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  4 14:46:40.038: INFO: Pod pod-with-poststart-http-hook still exists
Jan  4 14:46:42.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  4 14:46:42.045: INFO: Pod pod-with-poststart-http-hook still exists
Jan  4 14:46:44.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  4 14:46:44.045: INFO: Pod pod-with-poststart-http-hook still exists
Jan  4 14:46:46.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  4 14:46:46.043: INFO: Pod pod-with-poststart-http-hook still exists
Jan  4 14:46:48.031: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  4 14:46:48.045: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:46:48.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4082" for this suite.
Jan  4 14:47:12.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:47:12.245: INFO: namespace container-lifecycle-hook-4082 deletion completed in 24.19301584s

• [SLOW TEST:59.466 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
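In the postStart HTTP hook spec above, the framework first creates "the container to handle the HTTPGet hook request", then starts a pod whose postStart hook makes the kubelet issue an HTTP GET against it. A self-contained stand-in for that handler-plus-GET round trip (path and response body are illustrative assumptions) might be:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HookHandler(BaseHTTPRequestHandler):
    """Plays the role of the container that answers the HTTPGet hook."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"poststart ok")

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(("127.0.0.1", 0), HookHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Roughly what the kubelet does after the container starts: a plain GET
# against the hook's host/port/path.
url = f"http://127.0.0.1:{server.server_port}/poststart"
with urllib.request.urlopen(url, timeout=5) as resp:
    status, body = resp.status, resp.read().decode()
server.shutdown()

print(status, body)  # → 200 poststart ok
```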
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:47:12.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  4 14:47:12.418: INFO: Waiting up to 5m0s for pod "pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580" in namespace "emptydir-7880" to be "success or failure"
Jan  4 14:47:12.552: INFO: Pod "pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580": Phase="Pending", Reason="", readiness=false. Elapsed: 134.094119ms
Jan  4 14:47:14.567: INFO: Pod "pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14863956s
Jan  4 14:47:16.575: INFO: Pod "pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156735764s
Jan  4 14:47:18.597: INFO: Pod "pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178676638s
Jan  4 14:47:20.606: INFO: Pod "pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187990334s
Jan  4 14:47:22.627: INFO: Pod "pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580": Phase="Pending", Reason="", readiness=false. Elapsed: 10.209107678s
Jan  4 14:47:24.642: INFO: Pod "pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.223485134s
STEP: Saw pod success
Jan  4 14:47:24.642: INFO: Pod "pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580" satisfied condition "success or failure"
Jan  4 14:47:24.655: INFO: Trying to get logs from node iruya-node pod pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580 container test-container: 
STEP: delete the pod
Jan  4 14:47:24.801: INFO: Waiting for pod pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580 to disappear
Jan  4 14:47:24.810: INFO: Pod pod-ac4ab066-09dd-4cd9-9bf6-7aa191424580 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:47:24.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7880" for this suite.
Jan  4 14:47:30.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:47:31.109: INFO: namespace emptydir-7880 deletion completed in 6.282227839s

• [SLOW TEST:18.864 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
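The spec name above encodes a triple: who writes the test file (root), the requested file mode (0777), and the emptyDir medium (tmpfs, i.e. memory-backed). As a small aside on what the 0777 part means, the mode renders as world-readable, -writable, and -executable:

```python
import stat

# 0o777 from the spec name "(root,0777,tmpfs)": full permissions for
# owner, group, and others on the test file.
mode_string = stat.filemode(stat.S_IFREG | 0o777)
print(mode_string)  # → -rwxrwxrwx
```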
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:47:31.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2e359e47-1d73-41cf-86ba-638e6ec4a88d
STEP: Creating a pod to test consume configMaps
Jan  4 14:47:31.355: INFO: Waiting up to 5m0s for pod "pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278" in namespace "configmap-4704" to be "success or failure"
Jan  4 14:47:31.386: INFO: Pod "pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278": Phase="Pending", Reason="", readiness=false. Elapsed: 29.911475ms
Jan  4 14:47:33.399: INFO: Pod "pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043299262s
Jan  4 14:47:35.420: INFO: Pod "pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064068385s
Jan  4 14:47:37.437: INFO: Pod "pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081023872s
Jan  4 14:47:39.448: INFO: Pod "pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0919369s
Jan  4 14:47:41.459: INFO: Pod "pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278": Phase="Running", Reason="", readiness=true. Elapsed: 10.102549473s
Jan  4 14:47:43.469: INFO: Pod "pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.112855909s
STEP: Saw pod success
Jan  4 14:47:43.469: INFO: Pod "pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278" satisfied condition "success or failure"
Jan  4 14:47:43.472: INFO: Trying to get logs from node iruya-node pod pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278 container configmap-volume-test: 
STEP: delete the pod
Jan  4 14:47:43.728: INFO: Waiting for pod pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278 to disappear
Jan  4 14:47:43.750: INFO: Pod pod-configmaps-742db47d-003d-4bf3-a882-fb68810b7278 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:47:43.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4704" for this suite.
Jan  4 14:47:49.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:47:49.956: INFO: namespace configmap-4704 deletion completed in 6.195214959s

• [SLOW TEST:18.847 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:47:49.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-9b4bdf47-c544-4eca-94c1-fa544b5b0576
STEP: Creating a pod to test consume secrets
Jan  4 14:47:50.047: INFO: Waiting up to 5m0s for pod "pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b" in namespace "secrets-2448" to be "success or failure"
Jan  4 14:47:50.055: INFO: Pod "pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.261149ms
Jan  4 14:47:52.070: INFO: Pod "pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023121046s
Jan  4 14:47:54.082: INFO: Pod "pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03547798s
Jan  4 14:47:56.094: INFO: Pod "pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047592508s
Jan  4 14:47:58.139: INFO: Pod "pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b": Phase="Running", Reason="", readiness=true. Elapsed: 8.092209957s
Jan  4 14:48:00.147: INFO: Pod "pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100147646s
STEP: Saw pod success
Jan  4 14:48:00.147: INFO: Pod "pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b" satisfied condition "success or failure"
Jan  4 14:48:00.151: INFO: Trying to get logs from node iruya-node pod pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b container secret-volume-test: 
STEP: delete the pod
Jan  4 14:48:00.288: INFO: Waiting for pod pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b to disappear
Jan  4 14:48:00.298: INFO: Pod pod-secrets-1094db42-d7c6-4092-99fb-628c776e2d6b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:48:00.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2448" for this suite.
Jan  4 14:48:06.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:48:06.447: INFO: namespace secrets-2448 deletion completed in 6.137274566s

• [SLOW TEST:16.490 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:48:06.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-00184c7a-7ccb-40d1-9936-7b7daafd8b2c
STEP: Creating a pod to test consume secrets
Jan  4 14:48:06.574: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48" in namespace "projected-7655" to be "success or failure"
Jan  4 14:48:06.606: INFO: Pod "pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48": Phase="Pending", Reason="", readiness=false. Elapsed: 31.631438ms
Jan  4 14:48:08.622: INFO: Pod "pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048200231s
Jan  4 14:48:10.630: INFO: Pod "pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05576512s
Jan  4 14:48:12.643: INFO: Pod "pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068571492s
Jan  4 14:48:14.650: INFO: Pod "pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075518612s
Jan  4 14:48:16.658: INFO: Pod "pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084242788s
STEP: Saw pod success
Jan  4 14:48:16.659: INFO: Pod "pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48" satisfied condition "success or failure"
Jan  4 14:48:16.661: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48 container secret-volume-test: 
STEP: delete the pod
Jan  4 14:48:16.760: INFO: Waiting for pod pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48 to disappear
Jan  4 14:48:16.850: INFO: Pod pod-projected-secrets-a78032de-02d8-42f4-9544-ee8fabb5cd48 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:48:16.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7655" for this suite.
Jan  4 14:48:22.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:48:23.017: INFO: namespace projected-7655 deletion completed in 6.155321381s

• [SLOW TEST:16.569 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
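The "consumable in multiple volumes" spec above mounts the same projected secret at more than one path in a single pod and expects identical content at each. A sketch of that projection (mount paths and the key/value below are illustrative, not from the log):

```python
import os
import tempfile

# Illustrative secret payload; the real test's keys are not in the log.
secret_data = {"username": b"admin"}

def project(secret: dict, mount_path: str) -> None:
    """Materialize each secret key as a file under mount_path, the way a
    projected volume exposes secret data to the container."""
    os.makedirs(mount_path, exist_ok=True)
    for key, value in secret.items():
        with open(os.path.join(mount_path, key), "wb") as f:
            f.write(value)

with tempfile.TemporaryDirectory() as pod_fs:
    mounts = [os.path.join(pod_fs, "etc/secret-volume-1"),
              os.path.join(pod_fs, "etc/secret-volume-2")]
    for m in mounts:
        project(secret_data, m)
    contents = [open(os.path.join(m, "username"), "rb").read()
                for m in mounts]

print(contents[0] == contents[1])  # → True
```

Both mount points independently receive the same bytes, which is exactly the property the test asserts.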
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:48:23.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8d049f3c-2067-4550-a18e-3a64c2edd138
STEP: Creating a pod to test consume configMaps
Jan  4 14:48:23.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7" in namespace "projected-9626" to be "success or failure"
Jan  4 14:48:23.273: INFO: Pod "pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.358726ms
Jan  4 14:48:25.282: INFO: Pod "pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031377736s
Jan  4 14:48:27.295: INFO: Pod "pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044384297s
Jan  4 14:48:29.304: INFO: Pod "pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053602161s
Jan  4 14:48:31.315: INFO: Pod "pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064244946s
Jan  4 14:48:33.325: INFO: Pod "pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074140443s
Jan  4 14:48:35.336: INFO: Pod "pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.085473858s
STEP: Saw pod success
Jan  4 14:48:35.336: INFO: Pod "pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7" satisfied condition "success or failure"
Jan  4 14:48:35.349: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 14:48:35.459: INFO: Waiting for pod pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7 to disappear
Jan  4 14:48:35.479: INFO: Pod pod-projected-configmaps-b8c346c9-6295-4f0d-9874-51391de734b7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:48:35.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9626" for this suite.
Jan  4 14:48:41.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:48:41.666: INFO: namespace projected-9626 deletion completed in 6.179068917s

• [SLOW TEST:18.647 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:48:41.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 14:48:41.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec" in namespace "downward-api-9527" to be "success or failure"
Jan  4 14:48:41.801: INFO: Pod "downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec": Phase="Pending", Reason="", readiness=false. Elapsed: 28.703021ms
Jan  4 14:48:43.811: INFO: Pod "downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039106534s
Jan  4 14:48:45.823: INFO: Pod "downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05098946s
Jan  4 14:48:47.830: INFO: Pod "downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057333509s
Jan  4 14:48:49.858: INFO: Pod "downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086045513s
Jan  4 14:48:51.877: INFO: Pod "downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104925846s
STEP: Saw pod success
Jan  4 14:48:51.878: INFO: Pod "downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec" satisfied condition "success or failure"
Jan  4 14:48:51.884: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec container client-container: 
STEP: delete the pod
Jan  4 14:48:51.952: INFO: Waiting for pod downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec to disappear
Jan  4 14:48:51.957: INFO: Pod downwardapi-volume-2035b664-ad11-4797-b143-2ae43f242bec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:48:51.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9527" for this suite.
Jan  4 14:48:58.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:48:58.171: INFO: namespace downward-api-9527 deletion completed in 6.209118668s

• [SLOW TEST:16.505 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:48:58.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  4 14:48:58.304: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-582,SelfLink:/api/v1/namespaces/watch-582/configmaps/e2e-watch-test-watch-closed,UID:2f7db603-b7f5-4830-8aeb-44d08b6217be,ResourceVersion:19280539,Generation:0,CreationTimestamp:2020-01-04 14:48:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 14:48:58.304: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-582,SelfLink:/api/v1/namespaces/watch-582/configmaps/e2e-watch-test-watch-closed,UID:2f7db603-b7f5-4830-8aeb-44d08b6217be,ResourceVersion:19280540,Generation:0,CreationTimestamp:2020-01-04 14:48:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  4 14:48:58.836: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-582,SelfLink:/api/v1/namespaces/watch-582/configmaps/e2e-watch-test-watch-closed,UID:2f7db603-b7f5-4830-8aeb-44d08b6217be,ResourceVersion:19280541,Generation:0,CreationTimestamp:2020-01-04 14:48:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 14:48:58.837: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-582,SelfLink:/api/v1/namespaces/watch-582/configmaps/e2e-watch-test-watch-closed,UID:2f7db603-b7f5-4830-8aeb-44d08b6217be,ResourceVersion:19280542,Generation:0,CreationTimestamp:2020-01-04 14:48:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:48:58.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-582" for this suite.
Jan  4 14:49:04.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:49:05.104: INFO: namespace watch-582 deletion completed in 6.249206053s

• [SLOW TEST:6.933 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:49:05.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-c739b5e4-7743-451d-801d-04a1e106b0f3
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:49:05.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2254" for this suite.
Jan  4 14:49:11.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:49:11.372: INFO: namespace secrets-2254 deletion completed in 6.148511918s

• [SLOW TEST:6.268 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:49:11.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  4 14:49:11.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2248'
Jan  4 14:49:11.895: INFO: stderr: ""
Jan  4 14:49:11.895: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  4 14:49:12.902: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:12.903: INFO: Found 0 / 1
Jan  4 14:49:13.914: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:13.914: INFO: Found 0 / 1
Jan  4 14:49:14.904: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:14.904: INFO: Found 0 / 1
Jan  4 14:49:15.902: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:15.902: INFO: Found 0 / 1
Jan  4 14:49:16.902: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:16.902: INFO: Found 0 / 1
Jan  4 14:49:17.975: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:17.975: INFO: Found 0 / 1
Jan  4 14:49:18.906: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:18.906: INFO: Found 0 / 1
Jan  4 14:49:19.909: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:19.910: INFO: Found 0 / 1
Jan  4 14:49:20.907: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:20.907: INFO: Found 0 / 1
Jan  4 14:49:22.021: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:22.021: INFO: Found 0 / 1
Jan  4 14:49:22.906: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:22.907: INFO: Found 1 / 1
Jan  4 14:49:22.907: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  4 14:49:22.912: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:22.912: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  4 14:49:22.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-lvfzh --namespace=kubectl-2248 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  4 14:49:23.074: INFO: stderr: ""
Jan  4 14:49:23.074: INFO: stdout: "pod/redis-master-lvfzh patched\n"
STEP: checking annotations
Jan  4 14:49:23.153: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 14:49:23.153: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:49:23.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2248" for this suite.
Jan  4 14:50:01.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:50:01.272: INFO: namespace kubectl-2248 deletion completed in 38.114431723s

• [SLOW TEST:49.900 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:50:01.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0104 14:50:19.284133       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 14:50:19.284: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:50:19.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5340" for this suite.
Jan  4 14:50:30.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:50:35.839: INFO: namespace gc-5340 deletion completed in 16.546399964s

• [SLOW TEST:34.566 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:50:35.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  4 14:50:41.408: INFO: Pod name wrapped-volume-race-4792de24-4d3c-4a8c-ad12-45f53b6198a7: Found 0 pods out of 5
Jan  4 14:50:46.425: INFO: Pod name wrapped-volume-race-4792de24-4d3c-4a8c-ad12-45f53b6198a7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4792de24-4d3c-4a8c-ad12-45f53b6198a7 in namespace emptydir-wrapper-5215, will wait for the garbage collector to delete the pods
Jan  4 14:51:26.538: INFO: Deleting ReplicationController wrapped-volume-race-4792de24-4d3c-4a8c-ad12-45f53b6198a7 took: 22.673495ms
Jan  4 14:51:26.839: INFO: Terminating ReplicationController wrapped-volume-race-4792de24-4d3c-4a8c-ad12-45f53b6198a7 pods took: 300.885356ms
STEP: Creating RC which spawns configmap-volume pods
Jan  4 14:52:08.783: INFO: Pod name wrapped-volume-race-6eeca8a1-d91b-4f55-b95e-d31c2d02b5f7: Found 0 pods out of 5
Jan  4 14:52:13.891: INFO: Pod name wrapped-volume-race-6eeca8a1-d91b-4f55-b95e-d31c2d02b5f7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6eeca8a1-d91b-4f55-b95e-d31c2d02b5f7 in namespace emptydir-wrapper-5215, will wait for the garbage collector to delete the pods
Jan  4 14:52:52.038: INFO: Deleting ReplicationController wrapped-volume-race-6eeca8a1-d91b-4f55-b95e-d31c2d02b5f7 took: 23.244924ms
Jan  4 14:52:52.539: INFO: Terminating ReplicationController wrapped-volume-race-6eeca8a1-d91b-4f55-b95e-d31c2d02b5f7 pods took: 500.79352ms
STEP: Creating RC which spawns configmap-volume pods
Jan  4 14:53:47.160: INFO: Pod name wrapped-volume-race-bfb43da6-737c-4892-a14d-8135698fcc29: Found 0 pods out of 5
Jan  4 14:53:52.171: INFO: Pod name wrapped-volume-race-bfb43da6-737c-4892-a14d-8135698fcc29: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bfb43da6-737c-4892-a14d-8135698fcc29 in namespace emptydir-wrapper-5215, will wait for the garbage collector to delete the pods
Jan  4 14:54:30.273: INFO: Deleting ReplicationController wrapped-volume-race-bfb43da6-737c-4892-a14d-8135698fcc29 took: 15.168439ms
Jan  4 14:54:30.673: INFO: Terminating ReplicationController wrapped-volume-race-bfb43da6-737c-4892-a14d-8135698fcc29 pods took: 400.66693ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 14:55:28.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5215" for this suite.
Jan  4 14:55:40.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 14:55:40.976: INFO: namespace emptydir-wrapper-5215 deletion completed in 12.228768453s

• [SLOW TEST:305.135 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 14:55:40.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7621
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  4 14:55:41.231: INFO: Found 0 stateful pods, waiting for 3
Jan  4 14:55:51.241: INFO: Found 1 stateful pods, waiting for 3
Jan  4 14:56:01.248: INFO: Found 2 stateful pods, waiting for 3
Jan  4 14:56:11.240: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 14:56:11.240: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 14:56:11.240: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 14:56:21.267: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 14:56:21.267: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 14:56:21.267: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 14:56:21.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7621 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 14:56:25.679: INFO: stderr: "I0104 14:56:25.403440    1316 log.go:172] (0xc0008189a0) (0xc0003b8780) Create stream\nI0104 14:56:25.403536    1316 log.go:172] (0xc0008189a0) (0xc0003b8780) Stream added, broadcasting: 1\nI0104 14:56:25.407956    1316 log.go:172] (0xc0008189a0) Reply frame received for 1\nI0104 14:56:25.408006    1316 log.go:172] (0xc0008189a0) (0xc0006b20a0) Create stream\nI0104 14:56:25.408022    1316 log.go:172] (0xc0008189a0) (0xc0006b20a0) Stream added, broadcasting: 3\nI0104 14:56:25.409565    1316 log.go:172] (0xc0008189a0) Reply frame received for 3\nI0104 14:56:25.409665    1316 log.go:172] (0xc0008189a0) (0xc0006d6000) Create stream\nI0104 14:56:25.409680    1316 log.go:172] (0xc0008189a0) (0xc0006d6000) Stream added, broadcasting: 5\nI0104 14:56:25.411080    1316 log.go:172] (0xc0008189a0) Reply frame received for 5\nI0104 14:56:25.519817    1316 log.go:172] (0xc0008189a0) Data frame received for 5\nI0104 14:56:25.519852    1316 log.go:172] (0xc0006d6000) (5) Data frame handling\nI0104 14:56:25.519870    1316 log.go:172] (0xc0006d6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 14:56:25.560604    1316 log.go:172] (0xc0008189a0) Data frame received for 3\nI0104 14:56:25.560634    1316 log.go:172] (0xc0006b20a0) (3) Data frame handling\nI0104 14:56:25.560649    1316 log.go:172] (0xc0006b20a0) (3) Data frame sent\nI0104 14:56:25.666093    1316 log.go:172] (0xc0008189a0) (0xc0006b20a0) Stream removed, broadcasting: 3\nI0104 14:56:25.666209    1316 log.go:172] (0xc0008189a0) Data frame received for 1\nI0104 14:56:25.666257    1316 log.go:172] (0xc0003b8780) (1) Data frame handling\nI0104 14:56:25.666282    1316 log.go:172] (0xc0003b8780) (1) Data frame sent\nI0104 14:56:25.666358    1316 log.go:172] (0xc0008189a0) (0xc0006d6000) Stream removed, broadcasting: 5\nI0104 14:56:25.666421    1316 log.go:172] (0xc0008189a0) (0xc0003b8780) Stream removed, broadcasting: 1\nI0104 14:56:25.666458    1316 log.go:172] (0xc0008189a0) Go away received\nI0104 14:56:25.667671    1316 log.go:172] (0xc0008189a0) (0xc0003b8780) Stream removed, broadcasting: 1\nI0104 14:56:25.667739    1316 log.go:172] (0xc0008189a0) (0xc0006b20a0) Stream removed, broadcasting: 3\nI0104 14:56:25.667761    1316 log.go:172] (0xc0008189a0) (0xc0006d6000) Stream removed, broadcasting: 5\n"
Jan  4 14:56:25.679: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 14:56:25.679: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  4 14:56:37.270: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  4 14:56:47.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7621 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 14:56:47.912: INFO: stderr: "I0104 14:56:47.705047    1347 log.go:172] (0xc00092a840) (0xc0008268c0) Create stream\nI0104 14:56:47.705112    1347 log.go:172] (0xc00092a840) (0xc0008268c0) Stream added, broadcasting: 1\nI0104 14:56:47.714715    1347 log.go:172] (0xc00092a840) Reply frame received for 1\nI0104 14:56:47.714754    1347 log.go:172] (0xc00092a840) (0xc0007c4000) Create stream\nI0104 14:56:47.714762    1347 log.go:172] (0xc00092a840) (0xc0007c4000) Stream added, broadcasting: 3\nI0104 14:56:47.715601    1347 log.go:172] (0xc00092a840) Reply frame received for 3\nI0104 14:56:47.715627    1347 log.go:172] (0xc00092a840) (0xc000826000) Create stream\nI0104 14:56:47.715634    1347 log.go:172] (0xc00092a840) (0xc000826000) Stream added, broadcasting: 5\nI0104 14:56:47.716413    1347 log.go:172] (0xc00092a840) Reply frame received for 5\nI0104 14:56:47.792611    1347 log.go:172] (0xc00092a840) Data frame received for 5\nI0104 14:56:47.792683    1347 log.go:172] (0xc000826000) (5) Data frame handling\nI0104 14:56:47.792706    1347 log.go:172] (0xc000826000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 14:56:47.796001    1347 log.go:172] (0xc00092a840) Data frame received for 3\nI0104 14:56:47.796275    1347 log.go:172] (0xc0007c4000) (3) Data frame handling\nI0104 14:56:47.796348    1347 log.go:172] (0xc0007c4000) (3) Data frame sent\nI0104 14:56:47.897458    1347 log.go:172] (0xc00092a840) Data frame received for 1\nI0104 14:56:47.897578    1347 log.go:172] (0xc0008268c0) (1) Data frame handling\nI0104 14:56:47.897598    1347 log.go:172] (0xc0008268c0) (1) Data frame sent\nI0104 14:56:47.897613    1347 log.go:172] (0xc00092a840) (0xc0008268c0) Stream removed, broadcasting: 1\nI0104 14:56:47.897886    1347 log.go:172] (0xc00092a840) (0xc0007c4000) Stream removed, broadcasting: 3\nI0104 14:56:47.898510    1347 log.go:172] (0xc00092a840) (0xc000826000) Stream removed, broadcasting: 5\nI0104 14:56:47.898622    1347 log.go:172] (0xc00092a840) (0xc0008268c0) Stream removed, broadcasting: 1\nI0104 14:56:47.898641    1347 log.go:172] (0xc00092a840) (0xc0007c4000) Stream removed, broadcasting: 3\nI0104 14:56:47.898662    1347 log.go:172] (0xc00092a840) (0xc000826000) Stream removed, broadcasting: 5\n"
Jan  4 14:56:47.912: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 14:56:47.913: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
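(Editor's note, not part of the captured log.) The exec step above runs `mv -v /tmp/index.html /usr/share/nginx/html/ || true` inside pod ss2-1 to restore the nginx index file. A cluster-free sketch of the same file-swap idiom, using an illustrative scratch directory `/tmp/e2e-demo` (not from the log), shows why the test appends `|| true`:

```shell
# Reproduce the file-swap idiom locally; /tmp/e2e-demo stands in for the pod filesystem.
mkdir -p /tmp/e2e-demo/html
echo 'hello from index' > /tmp/e2e-demo/index.html
# '|| true' mirrors the test's guard: the exec step must not fail
# even if the file was already moved by an earlier attempt.
mv -v /tmp/e2e-demo/index.html /tmp/e2e-demo/html/ || true
mv -v /tmp/e2e-demo/index.html /tmp/e2e-demo/html/ || true   # second run: mv errors, '|| true' absorbs it
cat /tmp/e2e-demo/html/index.html
```

The first `mv` succeeds and the second fails harmlessly, matching the idempotent behavior the suite relies on when it retries.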

Jan  4 14:56:57.981: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:56:57.981: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:56:57.981: INFO: Waiting for Pod statefulset-7621/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:56:57.981: INFO: Waiting for Pod statefulset-7621/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:57:08.906: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:57:08.906: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:57:08.906: INFO: Waiting for Pod statefulset-7621/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:57:18.108: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:57:18.108: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:57:18.108: INFO: Waiting for Pod statefulset-7621/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:57:30.410: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:57:30.410: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:57:30.410: INFO: Waiting for Pod statefulset-7621/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:57:49.608: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:57:49.608: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:57:58.031: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:57:58.031: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 14:58:08.006: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
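(Editor's note, not part of the captured log.) The repeated "Waiting for Pod ... to have revision" lines above are a poll loop: the framework re-reads each pod's `controller-revision-hash` label until it equals the StatefulSet's update revision. A sketch of that pattern, with `current_revision` as a hypothetical stub for the kubectl/jsonpath lookup so it runs without a cluster:

```shell
# Stub for: kubectl get pod ss2-0 -o jsonpath='{.metadata.labels.controller-revision-hash}'
echo ss2-6c5cd755cd > /tmp/rev-demo
current_revision() { cat /tmp/rev-demo; }

# Simulate the rollout finishing in the background after ~1s.
( sleep 1; echo ss2-7c9b54fd4c > /tmp/rev-demo ) &

# Poll until the pod reports the update revision, with a bounded retry count.
for _ in $(seq 1 50); do
  if [ "$(current_revision)" = "ss2-7c9b54fd4c" ]; then
    echo "pod at update revision"
    break
  fi
  sleep 0.2
done
wait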
STEP: Rolling back to a previous revision
Jan  4 14:58:18.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7621 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 14:58:18.573: INFO: stderr: "I0104 14:58:18.187729    1365 log.go:172] (0xc000116dc0) (0xc0005e4960) Create stream\nI0104 14:58:18.187947    1365 log.go:172] (0xc000116dc0) (0xc0005e4960) Stream added, broadcasting: 1\nI0104 14:58:18.194443    1365 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0104 14:58:18.194600    1365 log.go:172] (0xc000116dc0) (0xc000301ae0) Create stream\nI0104 14:58:18.194753    1365 log.go:172] (0xc000116dc0) (0xc000301ae0) Stream added, broadcasting: 3\nI0104 14:58:18.199351    1365 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0104 14:58:18.199492    1365 log.go:172] (0xc000116dc0) (0xc0008d2000) Create stream\nI0104 14:58:18.199522    1365 log.go:172] (0xc000116dc0) (0xc0008d2000) Stream added, broadcasting: 5\nI0104 14:58:18.201662    1365 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0104 14:58:18.382014    1365 log.go:172] (0xc000116dc0) Data frame received for 5\nI0104 14:58:18.382208    1365 log.go:172] (0xc0008d2000) (5) Data frame handling\nI0104 14:58:18.382289    1365 log.go:172] (0xc0008d2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 14:58:18.460397    1365 log.go:172] (0xc000116dc0) Data frame received for 3\nI0104 14:58:18.460443    1365 log.go:172] (0xc000301ae0) (3) Data frame handling\nI0104 14:58:18.460463    1365 log.go:172] (0xc000301ae0) (3) Data frame sent\nI0104 14:58:18.561457    1365 log.go:172] (0xc000116dc0) (0xc000301ae0) Stream removed, broadcasting: 3\nI0104 14:58:18.561685    1365 log.go:172] (0xc000116dc0) Data frame received for 1\nI0104 14:58:18.561706    1365 log.go:172] (0xc0005e4960) (1) Data frame handling\nI0104 14:58:18.561750    1365 log.go:172] (0xc0005e4960) (1) Data frame sent\nI0104 14:58:18.561814    1365 log.go:172] (0xc000116dc0) (0xc0005e4960) Stream removed, broadcasting: 1\nI0104 14:58:18.562205    1365 log.go:172] (0xc000116dc0) (0xc0008d2000) Stream removed, broadcasting: 5\nI0104 14:58:18.562297    1365 log.go:172] (0xc000116dc0) Go away received\nI0104 14:58:18.563166    1365 log.go:172] (0xc000116dc0) (0xc0005e4960) Stream removed, broadcasting: 1\nI0104 14:58:18.563192    1365 log.go:172] (0xc000116dc0) (0xc000301ae0) Stream removed, broadcasting: 3\nI0104 14:58:18.563206    1365 log.go:172] (0xc000116dc0) (0xc0008d2000) Stream removed, broadcasting: 5\n"
Jan  4 14:58:18.574: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 14:58:18.574: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 14:58:28.638: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  4 14:58:38.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7621 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 14:58:39.058: INFO: stderr: "I0104 14:58:38.882590    1385 log.go:172] (0xc00012a790) (0xc0005d6820) Create stream\nI0104 14:58:38.882751    1385 log.go:172] (0xc00012a790) (0xc0005d6820) Stream added, broadcasting: 1\nI0104 14:58:38.888588    1385 log.go:172] (0xc00012a790) Reply frame received for 1\nI0104 14:58:38.888687    1385 log.go:172] (0xc00012a790) (0xc00075e000) Create stream\nI0104 14:58:38.888720    1385 log.go:172] (0xc00012a790) (0xc00075e000) Stream added, broadcasting: 3\nI0104 14:58:38.891098    1385 log.go:172] (0xc00012a790) Reply frame received for 3\nI0104 14:58:38.891189    1385 log.go:172] (0xc00012a790) (0xc000352000) Create stream\nI0104 14:58:38.891212    1385 log.go:172] (0xc00012a790) (0xc000352000) Stream added, broadcasting: 5\nI0104 14:58:38.892918    1385 log.go:172] (0xc00012a790) Reply frame received for 5\nI0104 14:58:38.981754    1385 log.go:172] (0xc00012a790) Data frame received for 5\nI0104 14:58:38.981919    1385 log.go:172] (0xc000352000) (5) Data frame handling\nI0104 14:58:38.981964    1385 log.go:172] (0xc000352000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 14:58:38.982174    1385 log.go:172] (0xc00012a790) Data frame received for 3\nI0104 14:58:38.982242    1385 log.go:172] (0xc00075e000) (3) Data frame handling\nI0104 14:58:38.982323    1385 log.go:172] (0xc00075e000) (3) Data frame sent\nI0104 14:58:39.051599    1385 log.go:172] (0xc00012a790) Data frame received for 1\nI0104 14:58:39.051905    1385 log.go:172] (0xc00012a790) (0xc00075e000) Stream removed, broadcasting: 3\nI0104 14:58:39.052068    1385 log.go:172] (0xc0005d6820) (1) Data frame handling\nI0104 14:58:39.052112    1385 log.go:172] (0xc0005d6820) (1) Data frame sent\nI0104 14:58:39.052155    1385 log.go:172] (0xc00012a790) (0xc000352000) Stream removed, broadcasting: 5\nI0104 14:58:39.052207    1385 log.go:172] (0xc00012a790) (0xc0005d6820) Stream removed, broadcasting: 1\nI0104 14:58:39.052669    1385 log.go:172] (0xc00012a790) Go away received\nI0104 14:58:39.052709    1385 log.go:172] (0xc00012a790) (0xc0005d6820) Stream removed, broadcasting: 1\nI0104 14:58:39.052752    1385 log.go:172] (0xc00012a790) (0xc00075e000) Stream removed, broadcasting: 3\nI0104 14:58:39.052772    1385 log.go:172] (0xc00012a790) (0xc000352000) Stream removed, broadcasting: 5\n"
Jan  4 14:58:39.058: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 14:58:39.058: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 14:58:49.108: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:58:49.108: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  4 14:58:49.108: INFO: Waiting for Pod statefulset-7621/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  4 14:58:59.185: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:58:59.185: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  4 14:58:59.185: INFO: Waiting for Pod statefulset-7621/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  4 14:59:11.438: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:59:11.438: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  4 14:59:19.121: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
Jan  4 14:59:19.121: INFO: Waiting for Pod statefulset-7621/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  4 14:59:29.120: INFO: Waiting for StatefulSet statefulset-7621/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  4 14:59:39.122: INFO: Deleting all statefulset in ns statefulset-7621
Jan  4 14:59:39.127: INFO: Scaling statefulset ss2 to 0
Jan  4 15:00:19.160: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 15:00:19.166: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:00:19.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7621" for this suite.
Jan  4 15:00:27.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:00:27.367: INFO: namespace statefulset-7621 deletion completed in 8.147731879s

• [SLOW TEST:286.391 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:00:27.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-53171964-50fb-47ec-9363-dea751561f88
STEP: Creating a pod to test consume configMaps
Jan  4 15:00:27.653: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002" in namespace "projected-8716" to be "success or failure"
Jan  4 15:00:27.660: INFO: Pod "pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002": Phase="Pending", Reason="", readiness=false. Elapsed: 7.488853ms
Jan  4 15:00:29.670: INFO: Pod "pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017153948s
Jan  4 15:00:31.682: INFO: Pod "pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029033279s
Jan  4 15:00:33.690: INFO: Pod "pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037168617s
Jan  4 15:00:35.697: INFO: Pod "pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043950082s
Jan  4 15:00:37.709: INFO: Pod "pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056771185s
STEP: Saw pod success
Jan  4 15:00:37.710: INFO: Pod "pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002" satisfied condition "success or failure"
Jan  4 15:00:37.715: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 15:00:37.835: INFO: Waiting for pod pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002 to disappear
Jan  4 15:00:37.844: INFO: Pod pod-projected-configmaps-03d9b13b-5c8d-4104-9651-89402dfac002 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:00:37.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8716" for this suite.
Jan  4 15:00:43.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:00:44.088: INFO: namespace projected-8716 deletion completed in 6.22907803s

• [SLOW TEST:16.720 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:00:44.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:00:50.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8621" for this suite.
Jan  4 15:00:56.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:00:57.106: INFO: namespace namespaces-8621 deletion completed in 6.192442952s
STEP: Destroying namespace "nsdeletetest-5602" for this suite.
Jan  4 15:00:57.111: INFO: Namespace nsdeletetest-5602 was already deleted
STEP: Destroying namespace "nsdeletetest-5134" for this suite.
Jan  4 15:01:03.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:01:03.297: INFO: namespace nsdeletetest-5134 deletion completed in 6.186160997s

• [SLOW TEST:19.209 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:01:03.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-1743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1743 to expose endpoints map[]
Jan  4 15:01:03.472: INFO: successfully validated that service multi-endpoint-test in namespace services-1743 exposes endpoints map[] (19.911187ms elapsed)
STEP: Creating pod pod1 in namespace services-1743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1743 to expose endpoints map[pod1:[100]]
Jan  4 15:01:07.627: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.130010717s elapsed, will retry)
Jan  4 15:01:11.695: INFO: successfully validated that service multi-endpoint-test in namespace services-1743 exposes endpoints map[pod1:[100]] (8.198372664s elapsed)
STEP: Creating pod pod2 in namespace services-1743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1743 to expose endpoints map[pod1:[100] pod2:[101]]
Jan  4 15:01:16.909: INFO: Unexpected endpoints: found map[fd530f34-6c32-4493-9ecb-82d8bfa8fb86:[100]], expected map[pod1:[100] pod2:[101]] (5.195342189s elapsed, will retry)
Jan  4 15:01:22.260: INFO: successfully validated that service multi-endpoint-test in namespace services-1743 exposes endpoints map[pod1:[100] pod2:[101]] (10.546119978s elapsed)
STEP: Deleting pod pod1 in namespace services-1743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1743 to expose endpoints map[pod2:[101]]
Jan  4 15:01:22.337: INFO: successfully validated that service multi-endpoint-test in namespace services-1743 exposes endpoints map[pod2:[101]] (55.951983ms elapsed)
STEP: Deleting pod pod2 in namespace services-1743
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1743 to expose endpoints map[]
Jan  4 15:01:22.370: INFO: successfully validated that service multi-endpoint-test in namespace services-1743 exposes endpoints map[] (10.372835ms elapsed)
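(Editor's note, not part of the captured log.) The "waiting ... to expose endpoints map[...]" checks above compare the endpoints the Service actually exposes against an expected pod-name-to-port map, retrying until they match. A cluster-free sketch of that comparison, treating each map as whitespace-separated name:port pairs (the names and ports are illustrative):

```shell
# Expected vs. found endpoint sets; order differs because maps are unordered.
expected="pod1:100 pod2:101"
found="pod2:101 pod1:100"

# Sort each set before comparing, so ordering does not cause a false mismatch.
if [ "$(printf '%s\n' $expected | sort)" = "$(printf '%s\n' $found | sort)" ]; then
  echo "endpoints match"
fi
```

When the sets differ (as in the "Unexpected endpoints" lines above), the framework logs the found map and retries until the timeout.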
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:01:22.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1743" for this suite.
Jan  4 15:01:46.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:01:46.623: INFO: namespace services-1743 deletion completed in 24.202725128s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:43.326 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:01:46.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan  4 15:01:46.800: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  4 15:01:46.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5808'
Jan  4 15:01:47.476: INFO: stderr: ""
Jan  4 15:01:47.476: INFO: stdout: "service/redis-slave created\n"
Jan  4 15:01:47.476: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  4 15:01:47.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5808'
Jan  4 15:01:48.268: INFO: stderr: ""
Jan  4 15:01:48.268: INFO: stdout: "service/redis-master created\n"
Jan  4 15:01:48.269: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  4 15:01:48.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5808'
Jan  4 15:01:49.031: INFO: stderr: ""
Jan  4 15:01:49.031: INFO: stdout: "service/frontend created\n"
Jan  4 15:01:49.032: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  4 15:01:49.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5808'
Jan  4 15:01:49.509: INFO: stderr: ""
Jan  4 15:01:49.510: INFO: stdout: "deployment.apps/frontend created\n"
Jan  4 15:01:49.510: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  4 15:01:49.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5808'
Jan  4 15:01:50.591: INFO: stderr: ""
Jan  4 15:01:50.591: INFO: stdout: "deployment.apps/redis-master created\n"
Jan  4 15:01:50.592: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  4 15:01:50.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5808'
Jan  4 15:01:51.179: INFO: stderr: ""
Jan  4 15:01:51.179: INFO: stdout: "deployment.apps/redis-slave created\n"
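(Editor's note, not part of the captured log.) Each guestbook manifest above is delivered to `kubectl create -f -` over stdin rather than from a file. A sketch of the same stdin-piping pattern, with a hypothetical `apply_stub` function standing in for kubectl so it runs without a cluster:

```shell
# apply_stub mimics 'kubectl create -f -': read a manifest from stdin, report creation.
apply_stub() { cat > /tmp/manifest-demo.yaml; echo "created from stdin"; }

# Quoted 'EOF' heredoc: the manifest is passed verbatim, with no shell expansion.
apply_stub <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
EOF

grep -c '^kind: Service' /tmp/manifest-demo.yaml
```

Piping manifests this way lets the suite create resources without writing temporary files, which is why the log shows `create -f -` for every component.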
STEP: validating guestbook app
Jan  4 15:01:51.179: INFO: Waiting for all frontend pods to be Running.
Jan  4 15:02:21.231: INFO: Waiting for frontend to serve content.
Jan  4 15:02:24.096: INFO: Trying to add a new entry to the guestbook.
Jan  4 15:02:24.283: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  4 15:02:24.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5808'
Jan  4 15:02:24.549: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 15:02:24.550: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 15:02:24.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5808'
Jan  4 15:02:24.734: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 15:02:24.734: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 15:02:24.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5808'
Jan  4 15:02:24.958: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 15:02:24.958: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 15:02:24.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5808'
Jan  4 15:02:25.205: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 15:02:25.205: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 15:02:25.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5808'
Jan  4 15:02:25.388: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 15:02:25.389: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  4 15:02:25.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5808'
Jan  4 15:02:25.771: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 15:02:25.771: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:02:25.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5808" for this suite.
Jan  4 15:03:08.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:03:08.165: INFO: namespace kubectl-5808 deletion completed in 42.382640719s

• [SLOW TEST:81.542 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:03:08.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  4 15:03:16.798: INFO: Successfully updated pod "labelsupdate9c7bdd89-4aef-4687-a240-378b143b40b8"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:03:18.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8423" for this suite.
Jan  4 15:03:40.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:03:41.040: INFO: namespace downward-api-8423 deletion completed in 22.150548086s

• [SLOW TEST:32.874 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
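The test above creates a pod, updates its labels, and waits for the downward API volume to reflect the change. The framework's generic wait loop can be sketched in Python (the real implementation is Go; names and the simulated check here are illustrative only):

```python
import time

def wait_for(condition, timeout=120.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` elapses -- the shape of the e2e framework's wait loops."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulated check: the label file projected by the downward API volume
# flips to the updated value after a few polls.
observed = iter([False, False, True])
assert wait_for(lambda: next(observed), timeout=10.0, interval=0.01)
```

The same pattern underlies the "Waiting up to 3m0s for all (but 0) nodes to be ready" messages that recur throughout the log.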
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:03:41.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  4 15:04:05.270: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 15:04:05.296: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 15:04:07.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 15:04:07.314: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 15:04:09.299: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 15:04:09.323: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 15:04:11.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 15:04:11.318: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 15:04:13.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 15:04:13.306: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 15:04:15.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 15:04:15.306: INFO: Pod pod-with-prestop-http-hook still exists
Jan  4 15:04:17.297: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  4 15:04:17.306: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:04:17.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5409" for this suite.
Jan  4 15:04:39.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:04:39.479: INFO: namespace container-lifecycle-hook-5409 deletion completed in 22.135291536s

• [SLOW TEST:58.439 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
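The "still exists" / "no longer exists" pairs above poll every ~2s for pod deletion. As a sketch, the elapsed deletion time can be recovered from the log timestamps themselves (hypothetical parsing code, using three lines copied from the run above; note the log omits the year, so `strptime` defaults it):

```python
from datetime import datetime

lines = [
    "Jan  4 15:04:05.296: INFO: Pod pod-with-prestop-http-hook still exists",
    "Jan  4 15:04:07.314: INFO: Pod pod-with-prestop-http-hook still exists",
    "Jan  4 15:04:17.306: INFO: Pod pod-with-prestop-http-hook no longer exists",
]

def parse_ts(line):
    # Strip everything from ": INFO:" onward, leaving "Jan  4 15:04:17.306".
    stamp = line.split(": INFO:")[0]
    return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

waited = (parse_ts(lines[-1]) - parse_ts(lines[0])).total_seconds()
print(f"pod deletion observed after {waited:.3f}s")
```

The ~12s gap reflects the graceful termination window during which the preStop HTTP hook fires before the kubelet kills the container.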
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:04:39.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5915/secret-test-6d49c0e5-5011-4860-a006-83d7d5ea9154
STEP: Creating a pod to test consume secrets
Jan  4 15:04:39.625: INFO: Waiting up to 5m0s for pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e" in namespace "secrets-5915" to be "success or failure"
Jan  4 15:04:39.655: INFO: Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.904171ms
Jan  4 15:04:41.663: INFO: Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037362807s
Jan  4 15:04:43.674: INFO: Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048252765s
Jan  4 15:04:45.681: INFO: Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055743233s
Jan  4 15:04:47.690: INFO: Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064369303s
Jan  4 15:04:49.707: INFO: Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08094271s
Jan  4 15:04:51.717: INFO: Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.091119435s
STEP: Saw pod success
Jan  4 15:04:51.717: INFO: Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e" satisfied condition "success or failure"
Jan  4 15:04:51.721: INFO: Trying to get logs from node iruya-node pod pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e container env-test: 
STEP: delete the pod
Jan  4 15:04:51.754: INFO: Waiting for pod pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e to disappear
Jan  4 15:04:51.784: INFO: Pod pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:04:51.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5915" for this suite.
Jan  4 15:04:57.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:04:57.993: INFO: namespace secrets-5915 deletion completed in 6.186641063s

• [SLOW TEST:18.513 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
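The phase-polling loop above repeats until the pod satisfies "success or failure". A minimal Python sketch of extracting that progression from the log (a hypothetical parser over three lines copied from the run; the real check inspects the Pod object's `status.phase` via the API):

```python
import re

log = '''
Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.904171ms
Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037362807s
Pod "pod-configmaps-247fd5d8-db74-4f71-9d64-b5f1d3676e1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.091119435s
'''

phases = re.findall(r'Phase="(\w+)"', log)
# "success or failure" is met once the pod leaves Pending for Succeeded.
assert phases[-1] == "Succeeded"
assert all(p == "Pending" for p in phases[:-1])
print(f"pod reached {phases[-1]} after {len(phases)} polls")
```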
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:04:57.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8452
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  4 15:04:58.079: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  4 15:05:42.435: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8452 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:05:42.435: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:05:42.549874       8 log.go:172] (0xc0018a8000) (0xc0012215e0) Create stream
I0104 15:05:42.550134       8 log.go:172] (0xc0018a8000) (0xc0012215e0) Stream added, broadcasting: 1
I0104 15:05:42.567752       8 log.go:172] (0xc0018a8000) Reply frame received for 1
I0104 15:05:42.568232       8 log.go:172] (0xc0018a8000) (0xc00110cc80) Create stream
I0104 15:05:42.568261       8 log.go:172] (0xc0018a8000) (0xc00110cc80) Stream added, broadcasting: 3
I0104 15:05:42.574791       8 log.go:172] (0xc0018a8000) Reply frame received for 3
I0104 15:05:42.574922       8 log.go:172] (0xc0018a8000) (0xc00110cd20) Create stream
I0104 15:05:42.574931       8 log.go:172] (0xc0018a8000) (0xc00110cd20) Stream added, broadcasting: 5
I0104 15:05:42.587409       8 log.go:172] (0xc0018a8000) Reply frame received for 5
I0104 15:05:42.963409       8 log.go:172] (0xc0018a8000) Data frame received for 3
I0104 15:05:42.963484       8 log.go:172] (0xc00110cc80) (3) Data frame handling
I0104 15:05:42.963502       8 log.go:172] (0xc00110cc80) (3) Data frame sent
I0104 15:05:43.096124       8 log.go:172] (0xc0018a8000) Data frame received for 1
I0104 15:05:43.096288       8 log.go:172] (0xc0018a8000) (0xc00110cc80) Stream removed, broadcasting: 3
I0104 15:05:43.096398       8 log.go:172] (0xc0012215e0) (1) Data frame handling
I0104 15:05:43.096430       8 log.go:172] (0xc0012215e0) (1) Data frame sent
I0104 15:05:43.096448       8 log.go:172] (0xc0018a8000) (0xc0012215e0) Stream removed, broadcasting: 1
I0104 15:05:43.096766       8 log.go:172] (0xc0018a8000) (0xc00110cd20) Stream removed, broadcasting: 5
I0104 15:05:43.096798       8 log.go:172] (0xc0018a8000) (0xc0012215e0) Stream removed, broadcasting: 1
I0104 15:05:43.096815       8 log.go:172] (0xc0018a8000) (0xc00110cc80) Stream removed, broadcasting: 3
I0104 15:05:43.096830       8 log.go:172] (0xc0018a8000) (0xc00110cd20) Stream removed, broadcasting: 5
I0104 15:05:43.097236       8 log.go:172] (0xc0018a8000) Go away received
Jan  4 15:05:43.097: INFO: Waiting for endpoints: map[]
Jan  4 15:05:43.108: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8452 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 15:05:43.108: INFO: >>> kubeConfig: /root/.kube/config
I0104 15:05:43.170926       8 log.go:172] (0xc0011b86e0) (0xc0002dcbe0) Create stream
I0104 15:05:43.171071       8 log.go:172] (0xc0011b86e0) (0xc0002dcbe0) Stream added, broadcasting: 1
I0104 15:05:43.177978       8 log.go:172] (0xc0011b86e0) Reply frame received for 1
I0104 15:05:43.178016       8 log.go:172] (0xc0011b86e0) (0xc001dc8460) Create stream
I0104 15:05:43.178025       8 log.go:172] (0xc0011b86e0) (0xc001dc8460) Stream added, broadcasting: 3
I0104 15:05:43.181554       8 log.go:172] (0xc0011b86e0) Reply frame received for 3
I0104 15:05:43.181581       8 log.go:172] (0xc0011b86e0) (0xc00110ce60) Create stream
I0104 15:05:43.181592       8 log.go:172] (0xc0011b86e0) (0xc00110ce60) Stream added, broadcasting: 5
I0104 15:05:43.185229       8 log.go:172] (0xc0011b86e0) Reply frame received for 5
I0104 15:05:43.327600       8 log.go:172] (0xc0011b86e0) Data frame received for 3
I0104 15:05:43.327649       8 log.go:172] (0xc001dc8460) (3) Data frame handling
I0104 15:05:43.327664       8 log.go:172] (0xc001dc8460) (3) Data frame sent
I0104 15:05:43.473244       8 log.go:172] (0xc0011b86e0) Data frame received for 1
I0104 15:05:43.473433       8 log.go:172] (0xc0011b86e0) (0xc001dc8460) Stream removed, broadcasting: 3
I0104 15:05:43.473532       8 log.go:172] (0xc0002dcbe0) (1) Data frame handling
I0104 15:05:43.473553       8 log.go:172] (0xc0002dcbe0) (1) Data frame sent
I0104 15:05:43.473586       8 log.go:172] (0xc0011b86e0) (0xc0002dcbe0) Stream removed, broadcasting: 1
I0104 15:05:43.473651       8 log.go:172] (0xc0011b86e0) (0xc00110ce60) Stream removed, broadcasting: 5
I0104 15:05:43.473752       8 log.go:172] (0xc0011b86e0) Go away received
I0104 15:05:43.474180       8 log.go:172] (0xc0011b86e0) (0xc0002dcbe0) Stream removed, broadcasting: 1
I0104 15:05:43.474199       8 log.go:172] (0xc0011b86e0) (0xc001dc8460) Stream removed, broadcasting: 3
I0104 15:05:43.474211       8 log.go:172] (0xc0011b86e0) (0xc00110ce60) Stream removed, broadcasting: 5
Jan  4 15:05:43.474: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:05:43.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8452" for this suite.
Jan  4 15:06:07.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:06:07.672: INFO: namespace pod-network-test-8452 deletion completed in 24.188323472s

• [SLOW TEST:69.678 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
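The intra-pod check above curls a `/dial` endpoint on the host-test-container-pod, which proxies a UDP probe to each target pod. The probe URL seen in the `ExecWithOptions` lines can be reconstructed like this (illustrative helper; the IPs are the pod IPs from this run):

```python
from urllib.parse import urlencode

def dial_url(proxy_ip, target_ip, protocol="udp", port=8081, tries=1):
    """Build the /dial probe URL the test issues via curl from the
    host-test-container-pod (netexec listens on 8080)."""
    query = urlencode({
        "request": "hostName",
        "protocol": protocol,
        "host": target_ip,
        "port": port,
        "tries": tries,
    })
    return f"http://{proxy_ip}:8080/dial?{query}"

url = dial_url("10.44.0.2", "10.44.0.1")
print(url)
```

The test passes when every target pod answers with its hostname, i.e. the "Waiting for endpoints: map[]" lines show no unreachable endpoints remaining.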
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:06:07.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 15:06:07.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1110'
Jan  4 15:06:08.071: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 15:06:08.071: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan  4 15:06:10.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1110'
Jan  4 15:06:10.329: INFO: stderr: ""
Jan  4 15:06:10.329: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:06:10.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1110" for this suite.
Jan  4 15:06:16.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:06:16.527: INFO: namespace kubectl-1110 deletion completed in 6.187254847s

• [SLOW TEST:8.855 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
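The stderr captured above warns that `kubectl run --generator=deployment/apps.v1` is deprecated in favor of `kubectl create`. A small sketch of rewriting the deprecated invocation into the suggested form (hypothetical helper; the flag subset handled here is just the one used by this test):

```python
import shlex

deprecated = ("kubectl run e2e-test-nginx-deployment "
              "--image=docker.io/library/nginx:1.14-alpine "
              "--generator=deployment/apps.v1")

def to_create_deployment(cmd):
    """Rewrite a deprecated `kubectl run --generator=deployment/apps.v1`
    command into the `kubectl create deployment` form the warning suggests."""
    argv = shlex.split(cmd)
    name = argv[2]                                        # "kubectl run NAME ..."
    image = next(a for a in argv if a.startswith("--image="))
    return ["kubectl", "create", "deployment", name, image]

print(" ".join(to_create_deployment(deprecated)))
```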
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:06:16.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  4 15:06:25.255: INFO: Successfully updated pod "labelsupdate94ed5827-2ba4-4d38-b17a-ce21cf4baa2f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:06:27.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1151" for this suite.
Jan  4 15:06:49.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:06:49.604: INFO: namespace projected-1151 deletion completed in 22.270047498s

• [SLOW TEST:33.076 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:06:49.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-39b61e40-276b-4770-ad1c-01dfc7532193
STEP: Creating a pod to test consume configMaps
Jan  4 15:06:49.720: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac" in namespace "configmap-9946" to be "success or failure"
Jan  4 15:06:49.743: INFO: Pod "pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 22.058217ms
Jan  4 15:06:51.753: INFO: Pod "pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03191082s
Jan  4 15:06:53.792: INFO: Pod "pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071573645s
Jan  4 15:06:55.812: INFO: Pod "pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091677955s
Jan  4 15:06:57.822: INFO: Pod "pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101022404s
STEP: Saw pod success
Jan  4 15:06:57.822: INFO: Pod "pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac" satisfied condition "success or failure"
Jan  4 15:06:57.825: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac container configmap-volume-test: 
STEP: delete the pod
Jan  4 15:06:57.895: INFO: Waiting for pod pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac to disappear
Jan  4 15:06:57.904: INFO: Pod pod-configmaps-ba2b3a4b-5d00-432d-891e-020b64f6d0ac no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:06:57.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9946" for this suite.
Jan  4 15:07:04.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:07:04.107: INFO: namespace configmap-9946 deletion completed in 6.193017218s

• [SLOW TEST:14.503 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:07:04.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:07:04.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146" in namespace "projected-6420" to be "success or failure"
Jan  4 15:07:04.220: INFO: Pod "downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146": Phase="Pending", Reason="", readiness=false. Elapsed: 13.551051ms
Jan  4 15:07:06.226: INFO: Pod "downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020140115s
Jan  4 15:07:08.240: INFO: Pod "downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033893101s
Jan  4 15:07:10.255: INFO: Pod "downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049147517s
Jan  4 15:07:12.264: INFO: Pod "downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146": Phase="Running", Reason="", readiness=true. Elapsed: 8.058359726s
Jan  4 15:07:14.280: INFO: Pod "downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073434098s
STEP: Saw pod success
Jan  4 15:07:14.280: INFO: Pod "downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146" satisfied condition "success or failure"
Jan  4 15:07:14.285: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146 container client-container: 
STEP: delete the pod
Jan  4 15:07:14.344: INFO: Waiting for pod downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146 to disappear
Jan  4 15:07:14.349: INFO: Pod downwardapi-volume-b99c6d62-e968-42ae-af45-7f9bb55b2146 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:07:14.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6420" for this suite.
Jan  4 15:07:20.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:07:20.532: INFO: namespace projected-6420 deletion completed in 6.177922332s

• [SLOW TEST:16.424 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:07:20.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan  4 15:07:20.701: INFO: Waiting up to 5m0s for pod "client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e" in namespace "containers-7602" to be "success or failure"
Jan  4 15:07:20.713: INFO: Pod "client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.92907ms
Jan  4 15:07:22.723: INFO: Pod "client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022561177s
Jan  4 15:07:24.731: INFO: Pod "client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030654264s
Jan  4 15:07:26.739: INFO: Pod "client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038690258s
Jan  4 15:07:28.747: INFO: Pod "client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046520921s
Jan  4 15:07:30.767: INFO: Pod "client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066113652s
STEP: Saw pod success
Jan  4 15:07:30.767: INFO: Pod "client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e" satisfied condition "success or failure"
Jan  4 15:07:30.781: INFO: Trying to get logs from node iruya-node pod client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e container test-container: 
STEP: delete the pod
Jan  4 15:07:30.923: INFO: Waiting for pod client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e to disappear
Jan  4 15:07:30.934: INFO: Pod client-containers-552212fa-420e-4f12-82f5-a6bb81f4f23e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:07:30.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7602" for this suite.
Jan  4 15:07:36.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:07:37.052: INFO: namespace containers-7602 deletion completed in 6.109795667s

• [SLOW TEST:16.520 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
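The override-arguments test works by setting `args` on the container spec, which replaces the image's Dockerfile `CMD` while leaving its `ENTRYPOINT` intact. A minimal sketch of such a pod manifest as a Python dict (image and args are hypothetical, not the ones this test used):

```python
# Setting spec.containers[].args overrides the image's default arguments
# (Dockerfile CMD); spec.containers[].command would override ENTRYPOINT.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},
    "spec": {
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            "args": ["echo", "override", "arguments"],
        }],
        "restartPolicy": "Never",
    },
}
assert pod["spec"]["containers"][0]["args"] == ["echo", "override", "arguments"]
```

The test then reads the pod's logs (as in the "Trying to get logs" line above) to confirm the overridden arguments were what actually ran.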
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:07:37.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  4 15:07:37.258: INFO: Number of nodes with available pods: 0
Jan  4 15:07:37.258: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:39.099: INFO: Number of nodes with available pods: 0
Jan  4 15:07:39.100: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:39.583: INFO: Number of nodes with available pods: 0
Jan  4 15:07:39.584: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:40.621: INFO: Number of nodes with available pods: 0
Jan  4 15:07:40.622: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:41.337: INFO: Number of nodes with available pods: 0
Jan  4 15:07:41.337: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:42.279: INFO: Number of nodes with available pods: 0
Jan  4 15:07:42.279: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:43.927: INFO: Number of nodes with available pods: 0
Jan  4 15:07:43.927: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:44.341: INFO: Number of nodes with available pods: 0
Jan  4 15:07:44.341: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:45.277: INFO: Number of nodes with available pods: 0
Jan  4 15:07:45.277: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:46.278: INFO: Number of nodes with available pods: 0
Jan  4 15:07:46.278: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:47.280: INFO: Number of nodes with available pods: 1
Jan  4 15:07:47.280: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:07:48.268: INFO: Number of nodes with available pods: 2
Jan  4 15:07:48.268: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  4 15:07:48.316: INFO: Number of nodes with available pods: 1
Jan  4 15:07:48.316: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:49.333: INFO: Number of nodes with available pods: 1
Jan  4 15:07:49.333: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:50.367: INFO: Number of nodes with available pods: 1
Jan  4 15:07:50.367: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:51.332: INFO: Number of nodes with available pods: 1
Jan  4 15:07:51.332: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:52.327: INFO: Number of nodes with available pods: 1
Jan  4 15:07:52.327: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:53.327: INFO: Number of nodes with available pods: 1
Jan  4 15:07:53.327: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:54.336: INFO: Number of nodes with available pods: 1
Jan  4 15:07:54.336: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:55.333: INFO: Number of nodes with available pods: 1
Jan  4 15:07:55.333: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:56.331: INFO: Number of nodes with available pods: 1
Jan  4 15:07:56.331: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:57.329: INFO: Number of nodes with available pods: 1
Jan  4 15:07:57.329: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:58.328: INFO: Number of nodes with available pods: 1
Jan  4 15:07:58.328: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:07:59.335: INFO: Number of nodes with available pods: 1
Jan  4 15:07:59.335: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:08:00.329: INFO: Number of nodes with available pods: 1
Jan  4 15:08:00.330: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:08:01.429: INFO: Number of nodes with available pods: 1
Jan  4 15:08:01.430: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:08:02.743: INFO: Number of nodes with available pods: 1
Jan  4 15:08:02.743: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:08:03.483: INFO: Number of nodes with available pods: 1
Jan  4 15:08:03.484: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:08:04.335: INFO: Number of nodes with available pods: 1
Jan  4 15:08:04.335: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:08:05.333: INFO: Number of nodes with available pods: 1
Jan  4 15:08:05.333: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 15:08:06.345: INFO: Number of nodes with available pods: 2
Jan  4 15:08:06.345: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9585, will wait for the garbage collector to delete the pods
Jan  4 15:08:06.405: INFO: Deleting DaemonSet.extensions daemon-set took: 6.764079ms
Jan  4 15:08:06.706: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.469956ms
Jan  4 15:08:17.912: INFO: Number of nodes with available pods: 0
Jan  4 15:08:17.912: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 15:08:17.915: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9585/daemonsets","resourceVersion":"19284072"},"items":null}

Jan  4 15:08:17.918: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9585/pods","resourceVersion":"19284072"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:08:17.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9585" for this suite.
Jan  4 15:08:23.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:08:24.077: INFO: namespace daemonsets-9585 deletion completed in 6.137896537s

• [SLOW TEST:47.024 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
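The repeated "Number of nodes with available pods" lines in the spec above come from a fixed-interval polling loop: the test re-checks DaemonSet status roughly once a second until the number of available pods equals the number of schedulable nodes, or a timeout expires. A minimal sketch of that wait pattern (the helper name, timings, and the fake status object are assumptions for illustration, not the framework's actual code):

```python
import time

def wait_until(check, timeout=30.0, interval=1.0):
    """Poll check() until it returns True or `timeout` seconds elapse.

    Mirrors the e2e pattern above: each iteration re-reads cluster
    state, and the loop succeeds only once the desired state is seen.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Simulated cluster state: available pods climb one poll at a time,
# as in the log (0, 0, ..., 1, 2 against 2 desired nodes).
class FakeDaemonSetStatus:
    def __init__(self, desired):
        self.desired = desired
        self.available = 0

    def poll(self):
        if self.available < self.desired:
            self.available += 1
        return self.available == self.desired

ds = FakeDaemonSetStatus(desired=2)
ok = wait_until(ds.poll, timeout=5.0, interval=0.01)
```

The real framework layers richer logging on each iteration (the per-poll INFO lines above), but the control flow is the same poll-until-condition-or-deadline shape.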
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:08:24.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-341db8ba-b1f8-4d76-8466-98f1810156c5
STEP: Creating a pod to test consume configMaps
Jan  4 15:08:24.249: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976" in namespace "projected-3516" to be "success or failure"
Jan  4 15:08:24.264: INFO: Pod "pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976": Phase="Pending", Reason="", readiness=false. Elapsed: 14.858431ms
Jan  4 15:08:26.275: INFO: Pod "pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025635999s
Jan  4 15:08:28.287: INFO: Pod "pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037792893s
Jan  4 15:08:30.302: INFO: Pod "pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052237021s
Jan  4 15:08:32.312: INFO: Pod "pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062962305s
Jan  4 15:08:34.328: INFO: Pod "pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078275291s
STEP: Saw pod success
Jan  4 15:08:34.328: INFO: Pod "pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976" satisfied condition "success or failure"
Jan  4 15:08:34.333: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 15:08:34.452: INFO: Waiting for pod pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976 to disappear
Jan  4 15:08:34.458: INFO: Pod pod-projected-configmaps-6e1a576d-927c-44a8-9bcc-997bac4ea976 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:08:34.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3516" for this suite.
Jan  4 15:08:40.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:08:40.699: INFO: namespace projected-3516 deletion completed in 6.231564106s

• [SLOW TEST:16.622 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:08:40.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-1bdfb147-a99b-43b2-be0d-94f4e77f34da
STEP: Creating a pod to test consume secrets
Jan  4 15:08:40.832: INFO: Waiting up to 5m0s for pod "pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c" in namespace "secrets-7536" to be "success or failure"
Jan  4 15:08:40.842: INFO: Pod "pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.467082ms
Jan  4 15:08:43.475: INFO: Pod "pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.642417664s
Jan  4 15:08:45.483: INFO: Pod "pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.650434775s
Jan  4 15:08:47.495: INFO: Pod "pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.663056935s
Jan  4 15:08:49.507: INFO: Pod "pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.674477642s
Jan  4 15:08:51.515: INFO: Pod "pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.682318056s
STEP: Saw pod success
Jan  4 15:08:51.515: INFO: Pod "pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c" satisfied condition "success or failure"
Jan  4 15:08:51.519: INFO: Trying to get logs from node iruya-node pod pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c container secret-volume-test: 
STEP: delete the pod
Jan  4 15:08:51.638: INFO: Waiting for pod pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c to disappear
Jan  4 15:08:51.657: INFO: Pod pod-secrets-54a7b57a-8914-4545-99b0-ac65d20cbf5c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:08:51.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7536" for this suite.
Jan  4 15:08:57.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:08:57.928: INFO: namespace secrets-7536 deletion completed in 6.255313915s

• [SLOW TEST:17.229 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
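Each of the volume specs above waits for its test pod to reach "success or failure": polling continues while the phase is Pending or Running, and stops at the first terminal phase, with Succeeded counting as a pass and Failed as a failure. A sketch of that terminal-state check (the phase strings match the Pod phases in the log; the helper itself is hypothetical):

```python
TERMINAL_PHASES = {"Succeeded", "Failed"}

def pod_condition(phase):
    """Return (done, success) for a pod phase, matching the
    'success or failure' wait above: keep polling while Pending or
    Running, stop on any terminal phase."""
    if phase in TERMINAL_PHASES:
        return True, phase == "Succeeded"
    return False, False

# Phase sequence as observed for pod-secrets-54a7b57a-... above:
# five Pending polls followed by Succeeded.
observed = ["Pending"] * 5 + ["Succeeded"]
result = None
for phase in observed:
    done, success = pod_condition(phase)
    if done:
        result = success
        break
```

Note that `readiness=false` on the final Succeeded poll is expected: a completed container is no longer "ready", which is why the condition keys off phase rather than readiness.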
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:08:57.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:08:58.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0" in namespace "projected-3452" to be "success or failure"
Jan  4 15:08:58.088: INFO: Pod "downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.493331ms
Jan  4 15:09:00.113: INFO: Pod "downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03509236s
Jan  4 15:09:02.126: INFO: Pod "downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048571183s
Jan  4 15:09:04.136: INFO: Pod "downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058428155s
Jan  4 15:09:06.144: INFO: Pod "downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066212894s
STEP: Saw pod success
Jan  4 15:09:06.144: INFO: Pod "downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0" satisfied condition "success or failure"
Jan  4 15:09:06.149: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0 container client-container: 
STEP: delete the pod
Jan  4 15:09:06.243: INFO: Waiting for pod downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0 to disappear
Jan  4 15:09:06.277: INFO: Pod downwardapi-volume-04a79eb5-de36-4691-94a2-489a3f1ae1e0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:09:06.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3452" for this suite.
Jan  4 15:09:12.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:09:12.453: INFO: namespace projected-3452 deletion completed in 6.151959366s

• [SLOW TEST:14.524 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:09:12.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0104 15:09:24.635089       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 15:09:24.635: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:09:24.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8187" for this suite.
Jan  4 15:09:30.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:09:30.773: INFO: namespace gc-8187 deletion completed in 6.132303665s

• [SLOW TEST:18.318 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
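The `Elapsed:` and "took:" values throughout this log are Go `time.Duration` strings (e.g. `2.642417664s`, `300.469956ms`). When post-processing logs like this, a small parser normalizes them to seconds; the regex and unit table below are my own, not part of the e2e framework:

```python
import re

# Go duration units, in seconds (Go also emits h/m/us/ns forms).
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0,
          "ms": 1e-3, "us": 1e-6, "µs": 1e-6, "ns": 1e-9}
_PART = re.compile(r"(\d+(?:\.\d+)?)(h|ms|us|µs|ns|m|s)")

def parse_go_duration(text):
    """Convert a Go duration string such as '1m3.5s' or
    '300.469956ms' to float seconds."""
    total, pos = 0.0, 0
    for m in _PART.finditer(text):
        if m.start() != pos:  # reject gaps / garbage between parts
            raise ValueError(f"bad duration: {text!r}")
        total += float(m.group(1)) * _UNITS[m.group(2)]
        pos = m.end()
    if pos != len(text) or pos == 0:
        raise ValueError(f"bad duration: {text!r}")
    return total
```

For example, the GC spec's namespace deletion above, logged as `6.132303665s`, parses to about 6.13 seconds.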
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:09:30.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:09:30.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7" in namespace "projected-3579" to be "success or failure"
Jan  4 15:09:30.922: INFO: Pod "downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593926ms
Jan  4 15:09:32.931: INFO: Pod "downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016827057s
Jan  4 15:09:34.963: INFO: Pod "downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049666331s
Jan  4 15:09:36.989: INFO: Pod "downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075104079s
Jan  4 15:09:39.047: INFO: Pod "downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.133105913s
STEP: Saw pod success
Jan  4 15:09:39.047: INFO: Pod "downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7" satisfied condition "success or failure"
Jan  4 15:09:39.055: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7 container client-container: 
STEP: delete the pod
Jan  4 15:09:39.102: INFO: Waiting for pod downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7 to disappear
Jan  4 15:09:39.191: INFO: Pod downwardapi-volume-d3320a6c-495f-4081-ae16-85f823910ae7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:09:39.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3579" for this suite.
Jan  4 15:09:45.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:09:45.335: INFO: namespace projected-3579 deletion completed in 6.136702199s

• [SLOW TEST:14.562 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:09:45.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-9q6d
STEP: Creating a pod to test atomic-volume-subpath
Jan  4 15:09:45.518: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9q6d" in namespace "subpath-6958" to be "success or failure"
Jan  4 15:09:45.544: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Pending", Reason="", readiness=false. Elapsed: 25.374858ms
Jan  4 15:09:47.549: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030831772s
Jan  4 15:09:49.565: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047304488s
Jan  4 15:09:51.579: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060366191s
Jan  4 15:09:53.586: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068073806s
Jan  4 15:09:55.882: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.364234514s
Jan  4 15:09:57.898: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 12.379328757s
Jan  4 15:09:59.906: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 14.3877574s
Jan  4 15:10:01.954: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 16.435702672s
Jan  4 15:10:03.986: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 18.468268576s
Jan  4 15:10:05.993: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 20.475220448s
Jan  4 15:10:08.013: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 22.49453444s
Jan  4 15:10:10.035: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 24.516746909s
Jan  4 15:10:12.043: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 26.524743816s
Jan  4 15:10:14.053: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 28.534408077s
Jan  4 15:10:16.071: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Running", Reason="", readiness=true. Elapsed: 30.552920575s
Jan  4 15:10:18.093: INFO: Pod "pod-subpath-test-configmap-9q6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.574606448s
STEP: Saw pod success
Jan  4 15:10:18.093: INFO: Pod "pod-subpath-test-configmap-9q6d" satisfied condition "success or failure"
Jan  4 15:10:18.097: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-9q6d container test-container-subpath-configmap-9q6d: 
STEP: delete the pod
Jan  4 15:10:18.169: INFO: Waiting for pod pod-subpath-test-configmap-9q6d to disappear
Jan  4 15:10:18.279: INFO: Pod pod-subpath-test-configmap-9q6d no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9q6d
Jan  4 15:10:18.279: INFO: Deleting pod "pod-subpath-test-configmap-9q6d" in namespace "subpath-6958"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:10:18.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6958" for this suite.
Jan  4 15:10:24.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:10:24.548: INFO: namespace subpath-6958 deletion completed in 6.230531081s

• [SLOW TEST:39.212 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
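Each `[SLOW TEST:...]` footer is the wall-clock gap between the spec's first and last log timestamps: for the Subpath spec above, 15:09:45.336 to 15:10:24.548 is 39.212 seconds. Such gaps can be recomputed from the klog-style timestamps; the format string and the supplied year are assumptions based on these lines, since the log omits the year:

```python
from datetime import datetime

def log_ts(line, year=2020):
    """Parse the leading 'Jan  4 15:09:45.336' timestamp of an e2e
    log line. The log omits the year, so one must be supplied."""
    month, day, clock = line.split()[:3]
    clock = clock.rstrip(":")  # drop the trailing ':' before 'INFO:'
    return datetime.strptime(f"{year} {month} {day} {clock}",
                             "%Y %b %d %H:%M:%S.%f")

first = log_ts("Jan  4 15:09:45.336: INFO: >>> kubeConfig: /root/.kube/config")
last = log_ts("Jan  4 15:10:24.548: INFO: namespace subpath-6958 deletion completed in 6.230531081s")
elapsed = (last - first).total_seconds()
```

This only works within a single day; a run crossing midnight would need the date carried forward.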
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:10:24.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:10:24.715: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98" in namespace "downward-api-4803" to be "success or failure"
Jan  4 15:10:24.743: INFO: Pod "downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 28.454603ms
Jan  4 15:10:27.883: INFO: Pod "downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 3.16851364s
Jan  4 15:10:29.892: INFO: Pod "downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 5.177285472s
Jan  4 15:10:31.907: INFO: Pod "downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 7.192684166s
Jan  4 15:10:33.927: INFO: Pod "downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 9.212612279s
Jan  4 15:10:35.936: INFO: Pod "downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98": Phase="Pending", Reason="", readiness=false. Elapsed: 11.221543329s
Jan  4 15:10:37.949: INFO: Pod "downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.234078328s
STEP: Saw pod success
Jan  4 15:10:37.949: INFO: Pod "downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98" satisfied condition "success or failure"
Jan  4 15:10:37.961: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98 container client-container: 
STEP: delete the pod
Jan  4 15:10:38.175: INFO: Waiting for pod downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98 to disappear
Jan  4 15:10:38.183: INFO: Pod downwardapi-volume-93c1cc72-f83b-4c96-902b-d601b658ad98 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:10:38.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4803" for this suite.
Jan  4 15:10:44.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:10:44.424: INFO: namespace downward-api-4803 deletion completed in 6.231441583s

• [SLOW TEST:19.874 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:10:44.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  4 15:10:44.571: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:10:59.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4612" for this suite.
Jan  4 15:11:05.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:11:05.917: INFO: namespace init-container-4612 deletion completed in 6.189737218s

• [SLOW TEST:21.492 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:11:05.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:11:06.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67" in namespace "projected-5873" to be "success or failure"
Jan  4 15:11:06.101: INFO: Pod "downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67": Phase="Pending", Reason="", readiness=false. Elapsed: 18.057413ms
Jan  4 15:11:08.166: INFO: Pod "downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083488036s
Jan  4 15:11:10.186: INFO: Pod "downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103093741s
Jan  4 15:11:12.202: INFO: Pod "downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119387907s
Jan  4 15:11:14.220: INFO: Pod "downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137133848s
Jan  4 15:11:16.230: INFO: Pod "downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.147321085s
STEP: Saw pod success
Jan  4 15:11:16.231: INFO: Pod "downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67" satisfied condition "success or failure"
Jan  4 15:11:16.245: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67 container client-container: 
STEP: delete the pod
Jan  4 15:11:16.401: INFO: Waiting for pod downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67 to disappear
Jan  4 15:11:16.436: INFO: Pod downwardapi-volume-eaaf16b1-babe-4f57-be1f-9af78af8bf67 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:11:16.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5873" for this suite.
Jan  4 15:11:24.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:11:24.570: INFO: namespace projected-5873 deletion completed in 8.106572424s

• [SLOW TEST:18.653 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
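The repeated `Phase="Pending" ... Elapsed: ...` lines above come from the framework polling the pod until it reaches a terminal phase or the 5m0s deadline passes. A minimal sketch of that wait loop (hypothetical `get_phase` callback standing in for an API-server query; not the framework's actual helper):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0):
    """Poll get_phase() every `interval` seconds until it returns a phase
    in `want`, or `timeout` seconds elapse -- mirroring the log's
    'Waiting up to 5m0s for pod ... to be "success or failure"' pattern."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f'pod still {phase!r} after {timeout}s')
        time.sleep(interval)

# Simulated pod that stays Pending for a few polls, then succeeds.
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), interval=0.01)
```

The real suite additionally distinguishes `Succeeded` from `Failed` (only the former satisfies the "success or failure" condition for these volume tests); the sketch returns whichever terminal phase it sees first.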
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:11:24.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-645936b7-c08d-43b9-bb70-6e6edfa2a95c
STEP: Creating a pod to test consume secrets
Jan  4 15:11:24.823: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412" in namespace "projected-1774" to be "success or failure"
Jan  4 15:11:24.829: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574589ms
Jan  4 15:11:26.951: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127816416s
Jan  4 15:11:29.212: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412": Phase="Pending", Reason="", readiness=false. Elapsed: 4.389548422s
Jan  4 15:11:31.330: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412": Phase="Pending", Reason="", readiness=false. Elapsed: 6.506829401s
Jan  4 15:11:33.577: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412": Phase="Pending", Reason="", readiness=false. Elapsed: 8.754617669s
Jan  4 15:11:35.589: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412": Phase="Pending", Reason="", readiness=false. Elapsed: 10.766326681s
Jan  4 15:11:37.605: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412": Phase="Pending", Reason="", readiness=false. Elapsed: 12.781932151s
Jan  4 15:11:39.612: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412": Phase="Pending", Reason="", readiness=false. Elapsed: 14.789396656s
Jan  4 15:11:41.620: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.797524773s
STEP: Saw pod success
Jan  4 15:11:41.620: INFO: Pod "pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412" satisfied condition "success or failure"
Jan  4 15:11:41.624: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 15:11:41.684: INFO: Waiting for pod pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412 to disappear
Jan  4 15:11:41.691: INFO: Pod pod-projected-secrets-7968a11c-6297-478f-b8f2-d56f3dc64412 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:11:41.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1774" for this suite.
Jan  4 15:11:47.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:11:47.938: INFO: namespace projected-1774 deletion completed in 6.24283213s

• [SLOW TEST:23.367 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:11:47.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan  4 15:11:48.694: INFO: created pod pod-service-account-defaultsa
Jan  4 15:11:48.694: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  4 15:11:48.763: INFO: created pod pod-service-account-mountsa
Jan  4 15:11:48.763: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  4 15:11:48.789: INFO: created pod pod-service-account-nomountsa
Jan  4 15:11:48.789: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  4 15:11:48.843: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  4 15:11:48.843: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  4 15:11:48.986: INFO: created pod pod-service-account-mountsa-mountspec
Jan  4 15:11:48.986: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  4 15:11:48.999: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  4 15:11:48.999: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  4 15:11:50.858: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  4 15:11:50.859: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  4 15:11:51.284: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  4 15:11:51.284: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  4 15:11:51.329: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  4 15:11:51.330: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:11:51.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3349" for this suite.
Jan  4 15:12:41.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:12:41.587: INFO: namespace svcaccounts-3349 deletion completed in 50.008621447s

• [SLOW TEST:53.648 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
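The nine pods created above exercise every combination of the ServiceAccount-level and pod-level `automountServiceAccountToken` fields. The rule the test verifies is that the pod spec's field, when set, overrides the ServiceAccount's, and the default is to mount. A simplified model of that decision (not the kubelet's actual code):

```python
def token_volume_mounted(pod_setting, sa_setting):
    """Effective automount decision: the pod-level
    automountServiceAccountToken field wins when set; otherwise the
    ServiceAccount's field applies; otherwise the token is mounted."""
    if pod_setting is not None:
        return pod_setting
    if sa_setting is not None:
        return sa_setting
    return True

# The 3x3 matrix from the log: pod name -> (SA field, pod field, mounted?).
cases = {
    "defaultsa":              (None,  None,  True),
    "mountsa":                (True,  None,  True),
    "nomountsa":              (False, None,  False),
    "defaultsa-mountspec":    (None,  True,  True),
    "mountsa-mountspec":      (True,  True,  True),
    "nomountsa-mountspec":    (False, True,  True),
    "defaultsa-nomountspec":  (None,  False, False),
    "mountsa-nomountspec":    (True,  False, False),
    "nomountsa-nomountspec":  (False, False, False),
}
results = {name: token_volume_mounted(pod, sa)
           for name, (sa, pod, _) in cases.items()}
```

Note the two log lines that make the precedence visible: `nomountsa-mountspec` mounts the token (pod `true` beats SA `false`), while `mountsa-nomountspec` does not (pod `false` beats SA `true`).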
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:12:41.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-ccbcfb4f-b4c8-4dea-99b0-be5bd848d7d2
STEP: Creating a pod to test consume configMaps
Jan  4 15:12:41.728: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155" in namespace "projected-8942" to be "success or failure"
Jan  4 15:12:41.745: INFO: Pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155": Phase="Pending", Reason="", readiness=false. Elapsed: 17.234235ms
Jan  4 15:12:43.757: INFO: Pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028928016s
Jan  4 15:12:45.768: INFO: Pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039754854s
Jan  4 15:12:47.825: INFO: Pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096542118s
Jan  4 15:12:49.838: INFO: Pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109896975s
Jan  4 15:12:51.847: INFO: Pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118755909s
Jan  4 15:12:53.904: INFO: Pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155": Phase="Pending", Reason="", readiness=false. Elapsed: 12.176501198s
Jan  4 15:12:55.913: INFO: Pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.185242177s
STEP: Saw pod success
Jan  4 15:12:55.913: INFO: Pod "pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155" satisfied condition "success or failure"
Jan  4 15:12:55.918: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 15:12:56.338: INFO: Waiting for pod pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155 to disappear
Jan  4 15:12:56.348: INFO: Pod pod-projected-configmaps-b8469ef7-7c6d-4951-8d62-d19b555c1155 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:12:56.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8942" for this suite.
Jan  4 15:13:02.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:13:02.684: INFO: namespace projected-8942 deletion completed in 6.326307477s

• [SLOW TEST:21.096 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:13:02.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c
Jan  4 15:13:02.876: INFO: Pod name my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c: Found 0 pods out of 1
Jan  4 15:13:07.928: INFO: Pod name my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c: Found 1 pods out of 1
Jan  4 15:13:07.928: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c" are running
Jan  4 15:13:12.259: INFO: Pod "my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c-jmj2z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 15:13:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 15:13:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 15:13:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-04 15:13:02 +0000 UTC Reason: Message:}])
Jan  4 15:13:12.260: INFO: Trying to dial the pod
Jan  4 15:13:17.776: INFO: Controller my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c: Got expected result from replica 1 [my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c-jmj2z]: "my-hostname-basic-5a6ce7f9-69c8-4a5b-9559-25d1968efc8c-jmj2z", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:13:17.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8346" for this suite.
Jan  4 15:13:23.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:13:24.174: INFO: namespace replication-controller-8346 deletion completed in 6.39142278s

• [SLOW TEST:21.490 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:13:24.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-da2c267a-150f-4ede-94ca-f28fdcca64a8
STEP: Creating a pod to test consume configMaps
Jan  4 15:13:24.382: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac" in namespace "projected-7072" to be "success or failure"
Jan  4 15:13:24.458: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 76.047634ms
Jan  4 15:13:26.465: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083158507s
Jan  4 15:13:28.477: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094820368s
Jan  4 15:13:30.490: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107402881s
Jan  4 15:13:32.499: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116962808s
Jan  4 15:13:34.511: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 10.128616001s
Jan  4 15:13:36.520: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 12.137906024s
Jan  4 15:13:38.554: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 14.171812036s
Jan  4 15:13:40.573: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 16.191206302s
Jan  4 15:13:42.596: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.214344747s
STEP: Saw pod success
Jan  4 15:13:42.597: INFO: Pod "pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac" satisfied condition "success or failure"
Jan  4 15:13:42.609: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac container projected-configmap-volume-test: 
STEP: delete the pod
Jan  4 15:13:42.780: INFO: Waiting for pod pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac to disappear
Jan  4 15:13:42.787: INFO: Pod pod-projected-configmaps-ab3fd1bc-ea23-418e-9e74-9a030d70c1ac no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:13:42.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7072" for this suite.
Jan  4 15:13:50.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:13:50.932: INFO: namespace projected-7072 deletion completed in 8.138784163s

• [SLOW TEST:26.758 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:13:50.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  4 15:13:51.180: INFO: Waiting up to 5m0s for pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51" in namespace "emptydir-9544" to be "success or failure"
Jan  4 15:13:51.310: INFO: Pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51": Phase="Pending", Reason="", readiness=false. Elapsed: 130.236997ms
Jan  4 15:13:53.367: INFO: Pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187459999s
Jan  4 15:13:55.376: INFO: Pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196323166s
Jan  4 15:13:57.381: INFO: Pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201323881s
Jan  4 15:13:59.960: INFO: Pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780307335s
Jan  4 15:14:01.968: INFO: Pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51": Phase="Pending", Reason="", readiness=false. Elapsed: 10.7886618s
Jan  4 15:14:03.978: INFO: Pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51": Phase="Pending", Reason="", readiness=false. Elapsed: 12.79842421s
Jan  4 15:14:05.989: INFO: Pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.809408386s
STEP: Saw pod success
Jan  4 15:14:05.989: INFO: Pod "pod-59008e6c-5fd4-4011-854d-831b5a0cde51" satisfied condition "success or failure"
Jan  4 15:14:05.991: INFO: Trying to get logs from node iruya-node pod pod-59008e6c-5fd4-4011-854d-831b5a0cde51 container test-container: 
STEP: delete the pod
Jan  4 15:14:06.075: INFO: Waiting for pod pod-59008e6c-5fd4-4011-854d-831b5a0cde51 to disappear
Jan  4 15:14:06.158: INFO: Pod pod-59008e6c-5fd4-4011-854d-831b5a0cde51 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:14:06.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9544" for this suite.
Jan  4 15:14:12.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:14:12.402: INFO: namespace emptydir-9544 deletion completed in 6.230519867s

• [SLOW TEST:21.470 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:14:12.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan  4 15:14:12.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  4 15:14:15.147: INFO: stderr: ""
Jan  4 15:14:15.147: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:14:15.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7410" for this suite.
Jan  4 15:14:21.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:14:21.328: INFO: namespace kubectl-7410 deletion completed in 6.169715733s

• [SLOW TEST:8.925 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
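The captured stdout above is wrapped in ANSI SGR color escapes (`\x1b[0;32m` for green, `\x1b[0;33m` for yellow), which `kubectl cluster-info` emits for terminal display. One way to normalize such output before matching on it (an illustrative helper, not what the e2e test itself does):

```python
import re

# SGR (color) sequences only: ESC [ <params> m
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s):
    """Remove ANSI color codes like the \\x1b[0;32m... sequences visible
    in the captured kubectl stdout above."""
    return ANSI_RE.sub("", s)

stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n")
plain = strip_ansi(stdout)
```

After stripping, the line reads plainly as `Kubernetes master is running at https://172.24.4.57:6443`, which is what the test's validation is really asserting.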
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:14:21.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4696.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4696.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4696.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4696.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4696.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4696.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 7.9.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.9.7_udp@PTR;check="$$(dig +tcp +noall +answer +search 7.9.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.9.7_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4696.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4696.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4696.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4696.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4696.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4696.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 7.9.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.9.7_udp@PTR;check="$$(dig +tcp +noall +answer +search 7.9.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.9.7_tcp@PTR;sleep 1; done

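The record-name construction buried in the probe script above (the `awk -F.` pipeline for the pod A record, and the reversed-octet PTR query) can be sketched as a couple of hypothetical helpers; the namespace `dns-4696` and IP `10.97.9.7` are taken from this run, the function names are illustrative only.

```python
# Hypothetical helpers mirroring the name construction done by the dig probe
# loop above; not part of the e2e framework itself.

def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Dashed pod A record, e.g. 10.97.9.7 -> 10-97-9-7.<ns>.pod.cluster.local."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

def ptr_name(ip: str) -> str:
    """Reverse lookup name, e.g. 10.97.9.7 -> 7.9.97.10.in-addr.arpa."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

print(pod_a_record("10.97.9.7", "dns-4696"))  # -> 10-97-9-7.dns-4696.pod.cluster.local
print(ptr_name("10.97.9.7"))                  # -> 7.9.97.10.in-addr.arpa.
```

The PTR name produced here matches the `7.9.97.10.in-addr.arpa.` queries visible in both probe scripts.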
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 15:14:42.737: INFO: Unable to read jessie_udp@dns-test-service.dns-4696.svc.cluster.local from pod dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8: the server could not find the requested resource (get pods dns-test-53042605-afca-4672-8c60-187454b817e8)
Jan  4 15:14:42.772: INFO: Unable to read jessie_tcp@dns-test-service.dns-4696.svc.cluster.local from pod dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8: the server could not find the requested resource (get pods dns-test-53042605-afca-4672-8c60-187454b817e8)
Jan  4 15:14:42.792: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4696.svc.cluster.local from pod dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8: the server could not find the requested resource (get pods dns-test-53042605-afca-4672-8c60-187454b817e8)
Jan  4 15:14:42.808: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4696.svc.cluster.local from pod dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8: the server could not find the requested resource (get pods dns-test-53042605-afca-4672-8c60-187454b817e8)
Jan  4 15:14:42.818: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-4696.svc.cluster.local from pod dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8: the server could not find the requested resource (get pods dns-test-53042605-afca-4672-8c60-187454b817e8)
Jan  4 15:14:42.840: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-4696.svc.cluster.local from pod dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8: the server could not find the requested resource (get pods dns-test-53042605-afca-4672-8c60-187454b817e8)
Jan  4 15:14:42.901: INFO: Unable to read jessie_udp@PodARecord from pod dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8: the server could not find the requested resource (get pods dns-test-53042605-afca-4672-8c60-187454b817e8)
Jan  4 15:14:42.945: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8: the server could not find the requested resource (get pods dns-test-53042605-afca-4672-8c60-187454b817e8)
Jan  4 15:14:42.982: INFO: Lookups using dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8 failed for: [jessie_udp@dns-test-service.dns-4696.svc.cluster.local jessie_tcp@dns-test-service.dns-4696.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4696.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4696.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-4696.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-4696.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  4 15:14:48.085: INFO: DNS probes using dns-4696/dns-test-53042605-afca-4672-8c60-187454b817e8 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:14:48.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4696" for this suite.
Jan  4 15:14:56.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:14:56.746: INFO: namespace dns-4696 deletion completed in 8.245878976s

• [SLOW TEST:35.417 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:14:56.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 15:14:56.920: INFO: Creating deployment "test-recreate-deployment"
Jan  4 15:14:56.929: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  4 15:14:56.964: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan  4 15:14:59.000: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  4 15:14:59.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747696, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:15:01.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747696, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:15:03.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747696, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:15:05.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747697, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713747696, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
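The repeated `DeploymentStatus` dumps above show the wait loop polling until the deployment reports availability. A minimal sketch of that condition check (not the e2e framework's actual helper; the dict layout is an assumption modeled on the logged status) looks like this:

```python
# Sketch of an availability check over DeploymentStatus conditions, modeled
# on the "Available: False / MinimumReplicasUnavailable" dumps in the log.

def is_available(status: dict) -> bool:
    """Return True once the Deployment's Available condition is True."""
    for cond in status.get("conditions", []):
        if cond["type"] == "Available":
            return cond["status"] == "True"
    return False

status = {
    "replicas": 1, "updatedReplicas": 1, "availableReplicas": 0,
    "conditions": [
        {"type": "Available", "status": "False",
         "reason": "MinimumReplicasUnavailable"},
        {"type": "Progressing", "status": "True",
         "reason": "ReplicaSetUpdated"},
    ],
}
print(is_available(status))  # -> False, matching the polled status above
```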
Jan  4 15:15:07.008: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  4 15:15:07.015: INFO: Updating deployment test-recreate-deployment
Jan  4 15:15:07.015: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  4 15:15:08.060: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4413,SelfLink:/apis/apps/v1/namespaces/deployment-4413/deployments/test-recreate-deployment,UID:126afa1f-ed62-44c1-94e7-4366db02eec0,ResourceVersion:19285198,Generation:2,CreationTimestamp:2020-01-04 15:14:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-04 15:15:07 +0000 UTC 2020-01-04 15:15:07 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-04 15:15:08 +0000 UTC 2020-01-04 15:14:56 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  4 15:15:08.081: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4413,SelfLink:/apis/apps/v1/namespaces/deployment-4413/replicasets/test-recreate-deployment-5c8c9cc69d,UID:6b3b1201-5f5b-4488-9ecd-f1897b2fe6bd,ResourceVersion:19285196,Generation:1,CreationTimestamp:2020-01-04 15:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 126afa1f-ed62-44c1-94e7-4366db02eec0 0xc002349387 0xc002349388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 15:15:08.081: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  4 15:15:08.082: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4413,SelfLink:/apis/apps/v1/namespaces/deployment-4413/replicasets/test-recreate-deployment-6df85df6b9,UID:0bb22635-388c-4a7c-9c04-13d7c5f49674,ResourceVersion:19285186,Generation:2,CreationTimestamp:2020-01-04 15:14:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 126afa1f-ed62-44c1-94e7-4366db02eec0 0xc002349457 0xc002349458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 15:15:08.089: INFO: Pod "test-recreate-deployment-5c8c9cc69d-b9jbx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-b9jbx,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4413,SelfLink:/api/v1/namespaces/deployment-4413/pods/test-recreate-deployment-5c8c9cc69d-b9jbx,UID:13117641-3956-4c0a-b08d-377c8ba9880b,ResourceVersion:19285199,Generation:0,CreationTimestamp:2020-01-04 15:15:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 6b3b1201-5f5b-4488-9ecd-f1897b2fe6bd 0xc0030a7987 0xc0030a7988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-htf4j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-htf4j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-htf4j true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030a7a20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030a7a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:15:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:15:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-04 15:15:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:15:08.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4413" for this suite.
Jan  4 15:15:16.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:15:16.219: INFO: namespace deployment-4413 deletion completed in 8.122569763s

• [SLOW TEST:19.473 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:15:16.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-kq2c
STEP: Creating a pod to test atomic-volume-subpath
Jan  4 15:15:16.341: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kq2c" in namespace "subpath-9555" to be "success or failure"
Jan  4 15:15:16.349: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081294ms
Jan  4 15:15:18.355: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014388914s
Jan  4 15:15:21.070: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.728479791s
Jan  4 15:15:23.083: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.741857253s
Jan  4 15:15:25.090: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748509048s
Jan  4 15:15:27.097: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.755430723s
Jan  4 15:15:29.218: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 12.877361455s
Jan  4 15:15:31.227: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 14.885518512s
Jan  4 15:15:33.236: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 16.895408249s
Jan  4 15:15:35.252: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 18.910753588s
Jan  4 15:15:37.262: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 20.921103249s
Jan  4 15:15:39.275: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 22.933982346s
Jan  4 15:15:41.280: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 24.939415492s
Jan  4 15:15:43.290: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 26.949261263s
Jan  4 15:15:45.299: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 28.958230915s
Jan  4 15:15:47.311: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 30.96942486s
Jan  4 15:15:49.318: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Running", Reason="", readiness=true. Elapsed: 32.976815903s
Jan  4 15:15:51.323: INFO: Pod "pod-subpath-test-secret-kq2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.982424117s
STEP: Saw pod success
Jan  4 15:15:51.324: INFO: Pod "pod-subpath-test-secret-kq2c" satisfied condition "success or failure"
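The phase lines above are iterations of the framework's "success or failure" wait. A minimal sketch of that polling pattern, assuming a pluggable `poll_phase` callable in place of a real pod-status lookup:

```python
# Sketch of the success-or-failure wait loop whose iterations are logged
# above; poll_phase is a hypothetical stand-in for a pod-status query.
import itertools
import time

def wait_for_completion(poll_phase, max_polls=300, interval=0.0):
    """Poll until the pod reaches a terminal phase, or give up."""
    for _ in range(max_polls):
        phase = poll_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)  # the real framework waits ~2s between polls
    raise TimeoutError("pod never reached a terminal phase")

# Simulate the progression seen in the log: Pending -> Running -> Succeeded.
phases = itertools.chain(["Pending"] * 3, ["Running"] * 5,
                         itertools.repeat("Succeeded"))
print(wait_for_completion(lambda: next(phases)))  # -> Succeeded
```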
Jan  4 15:15:51.326: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-kq2c container test-container-subpath-secret-kq2c: 
STEP: delete the pod
Jan  4 15:15:51.787: INFO: Waiting for pod pod-subpath-test-secret-kq2c to disappear
Jan  4 15:15:51.811: INFO: Pod pod-subpath-test-secret-kq2c no longer exists
STEP: Deleting pod pod-subpath-test-secret-kq2c
Jan  4 15:15:51.812: INFO: Deleting pod "pod-subpath-test-secret-kq2c" in namespace "subpath-9555"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:15:51.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9555" for this suite.
Jan  4 15:15:59.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:16:00.008: INFO: namespace subpath-9555 deletion completed in 8.181045947s

• [SLOW TEST:43.788 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:16:00.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan  4 15:16:00.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4512'
Jan  4 15:16:00.477: INFO: stderr: ""
Jan  4 15:16:00.477: INFO: stdout: "pod/pause created\n"
Jan  4 15:16:00.477: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  4 15:16:00.477: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4512" to be "running and ready"
Jan  4 15:16:00.509: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 31.768388ms
Jan  4 15:16:02.520: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043146132s
Jan  4 15:16:04.536: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059142977s
Jan  4 15:16:06.548: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070456907s
Jan  4 15:16:08.555: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077436085s
Jan  4 15:16:10.566: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.088679478s
Jan  4 15:16:10.566: INFO: Pod "pause" satisfied condition "running and ready"
Jan  4 15:16:10.566: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  4 15:16:10.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4512'
Jan  4 15:16:11.158: INFO: stderr: ""
Jan  4 15:16:11.158: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  4 15:16:11.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4512'
Jan  4 15:16:11.300: INFO: stderr: ""
Jan  4 15:16:11.300: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  4 15:16:11.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4512'
Jan  4 15:16:11.398: INFO: stderr: ""
Jan  4 15:16:11.398: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  4 15:16:11.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4512'
Jan  4 15:16:11.464: INFO: stderr: ""
Jan  4 15:16:11.464: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan  4 15:16:11.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4512'
Jan  4 15:16:11.607: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 15:16:11.607: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  4 15:16:11.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4512'
Jan  4 15:16:11.688: INFO: stderr: "No resources found.\n"
Jan  4 15:16:11.689: INFO: stdout: ""
Jan  4 15:16:11.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4512 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 15:16:11.778: INFO: stderr: ""
Jan  4 15:16:11.778: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:16:11.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4512" for this suite.
Jan  4 15:16:17.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:16:17.951: INFO: namespace kubectl-4512 deletion completed in 6.164892843s

• [SLOW TEST:17.941 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:16:17.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  4 15:16:18.182: INFO: Waiting up to 5m0s for pod "pod-8d7236f0-5744-42bb-b386-d7db45c69b2f" in namespace "emptydir-7766" to be "success or failure"
Jan  4 15:16:18.190: INFO: Pod "pod-8d7236f0-5744-42bb-b386-d7db45c69b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.685789ms
Jan  4 15:16:20.212: INFO: Pod "pod-8d7236f0-5744-42bb-b386-d7db45c69b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029365407s
Jan  4 15:16:22.223: INFO: Pod "pod-8d7236f0-5744-42bb-b386-d7db45c69b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040357292s
Jan  4 15:16:24.229: INFO: Pod "pod-8d7236f0-5744-42bb-b386-d7db45c69b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046826389s
Jan  4 15:16:26.249: INFO: Pod "pod-8d7236f0-5744-42bb-b386-d7db45c69b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066735735s
Jan  4 15:16:28.259: INFO: Pod "pod-8d7236f0-5744-42bb-b386-d7db45c69b2f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076595962s
Jan  4 15:16:30.272: INFO: Pod "pod-8d7236f0-5744-42bb-b386-d7db45c69b2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.089568777s
STEP: Saw pod success
Jan  4 15:16:30.272: INFO: Pod "pod-8d7236f0-5744-42bb-b386-d7db45c69b2f" satisfied condition "success or failure"
Jan  4 15:16:30.277: INFO: Trying to get logs from node iruya-node pod pod-8d7236f0-5744-42bb-b386-d7db45c69b2f container test-container: 
STEP: delete the pod
Jan  4 15:16:30.353: INFO: Waiting for pod pod-8d7236f0-5744-42bb-b386-d7db45c69b2f to disappear
Jan  4 15:16:30.359: INFO: Pod pod-8d7236f0-5744-42bb-b386-d7db45c69b2f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:16:30.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7766" for this suite.
Jan  4 15:16:36.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:16:36.595: INFO: namespace emptydir-7766 deletion completed in 6.230866202s

• [SLOW TEST:18.644 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:16:36.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 15:16:36.635: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:16:37.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5683" for this suite.
Jan  4 15:16:43.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:16:44.074: INFO: namespace custom-resource-definition-5683 deletion completed in 6.257187374s

• [SLOW TEST:7.479 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:16:44.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  4 15:16:44.158: INFO: Waiting up to 5m0s for pod "pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7" in namespace "emptydir-7037" to be "success or failure"
Jan  4 15:16:44.202: INFO: Pod "pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7": Phase="Pending", Reason="", readiness=false. Elapsed: 43.32229ms
Jan  4 15:16:46.213: INFO: Pod "pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054258519s
Jan  4 15:16:48.223: INFO: Pod "pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064381816s
Jan  4 15:16:50.232: INFO: Pod "pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073040733s
Jan  4 15:16:52.240: INFO: Pod "pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081713629s
Jan  4 15:16:54.276: INFO: Pod "pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116877524s
STEP: Saw pod success
Jan  4 15:16:54.276: INFO: Pod "pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7" satisfied condition "success or failure"
Jan  4 15:16:54.286: INFO: Trying to get logs from node iruya-node pod pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7 container test-container: 
STEP: delete the pod
Jan  4 15:16:54.418: INFO: Waiting for pod pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7 to disappear
Jan  4 15:16:54.423: INFO: Pod pod-99ea5d35-fc9d-4c61-957e-a9f8d03480d7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:16:54.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7037" for this suite.
Jan  4 15:17:00.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:17:00.578: INFO: namespace emptydir-7037 deletion completed in 6.149565413s

• [SLOW TEST:16.502 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:17:00.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:17:00.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56" in namespace "downward-api-2523" to be "success or failure"
Jan  4 15:17:00.751: INFO: Pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56": Phase="Pending", Reason="", readiness=false. Elapsed: 24.724976ms
Jan  4 15:17:02.758: INFO: Pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032157875s
Jan  4 15:17:04.768: INFO: Pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041812444s
Jan  4 15:17:06.775: INFO: Pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049128272s
Jan  4 15:17:08.798: INFO: Pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071578657s
Jan  4 15:17:10.808: INFO: Pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082249334s
Jan  4 15:17:12.815: INFO: Pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56": Phase="Pending", Reason="", readiness=false. Elapsed: 12.088621134s
Jan  4 15:17:14.822: INFO: Pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.096292901s
STEP: Saw pod success
Jan  4 15:17:14.822: INFO: Pod "downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56" satisfied condition "success or failure"
Jan  4 15:17:14.826: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56 container client-container: 
STEP: delete the pod
Jan  4 15:17:14.882: INFO: Waiting for pod downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56 to disappear
Jan  4 15:17:14.902: INFO: Pod downwardapi-volume-b54822e8-a78f-4fe6-b8a7-e70e1a378f56 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:17:14.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2523" for this suite.
Jan  4 15:17:21.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:17:21.101: INFO: namespace downward-api-2523 deletion completed in 6.191125867s

• [SLOW TEST:20.521 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:17:21.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  4 15:17:21.239: INFO: Waiting up to 5m0s for pod "pod-c9c066bb-d578-4250-be29-5b4058931b34" in namespace "emptydir-2193" to be "success or failure"
Jan  4 15:17:21.275: INFO: Pod "pod-c9c066bb-d578-4250-be29-5b4058931b34": Phase="Pending", Reason="", readiness=false. Elapsed: 36.043685ms
Jan  4 15:17:23.282: INFO: Pod "pod-c9c066bb-d578-4250-be29-5b4058931b34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043679581s
Jan  4 15:17:25.290: INFO: Pod "pod-c9c066bb-d578-4250-be29-5b4058931b34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050963722s
Jan  4 15:17:27.296: INFO: Pod "pod-c9c066bb-d578-4250-be29-5b4058931b34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0569355s
Jan  4 15:17:29.303: INFO: Pod "pod-c9c066bb-d578-4250-be29-5b4058931b34": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06450118s
Jan  4 15:17:31.573: INFO: Pod "pod-c9c066bb-d578-4250-be29-5b4058931b34": Phase="Pending", Reason="", readiness=false. Elapsed: 10.334840403s
Jan  4 15:17:33.579: INFO: Pod "pod-c9c066bb-d578-4250-be29-5b4058931b34": Phase="Pending", Reason="", readiness=false. Elapsed: 12.340124119s
Jan  4 15:17:35.586: INFO: Pod "pod-c9c066bb-d578-4250-be29-5b4058931b34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.346918739s
STEP: Saw pod success
Jan  4 15:17:35.586: INFO: Pod "pod-c9c066bb-d578-4250-be29-5b4058931b34" satisfied condition "success or failure"
Jan  4 15:17:35.590: INFO: Trying to get logs from node iruya-node pod pod-c9c066bb-d578-4250-be29-5b4058931b34 container test-container: 
STEP: delete the pod
Jan  4 15:17:35.769: INFO: Waiting for pod pod-c9c066bb-d578-4250-be29-5b4058931b34 to disappear
Jan  4 15:17:35.783: INFO: Pod pod-c9c066bb-d578-4250-be29-5b4058931b34 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:17:35.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2193" for this suite.
Jan  4 15:17:41.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:17:41.982: INFO: namespace emptydir-2193 deletion completed in 6.174714s

• [SLOW TEST:20.880 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:17:41.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  4 15:17:42.170: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5275,SelfLink:/api/v1/namespaces/watch-5275/configmaps/e2e-watch-test-label-changed,UID:7504b6a3-c69a-412f-90da-eda6a6168688,ResourceVersion:19285592,Generation:0,CreationTimestamp:2020-01-04 15:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 15:17:42.171: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5275,SelfLink:/api/v1/namespaces/watch-5275/configmaps/e2e-watch-test-label-changed,UID:7504b6a3-c69a-412f-90da-eda6a6168688,ResourceVersion:19285593,Generation:0,CreationTimestamp:2020-01-04 15:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  4 15:17:42.171: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5275,SelfLink:/api/v1/namespaces/watch-5275/configmaps/e2e-watch-test-label-changed,UID:7504b6a3-c69a-412f-90da-eda6a6168688,ResourceVersion:19285594,Generation:0,CreationTimestamp:2020-01-04 15:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  4 15:17:52.224: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5275,SelfLink:/api/v1/namespaces/watch-5275/configmaps/e2e-watch-test-label-changed,UID:7504b6a3-c69a-412f-90da-eda6a6168688,ResourceVersion:19285610,Generation:0,CreationTimestamp:2020-01-04 15:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 15:17:52.225: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5275,SelfLink:/api/v1/namespaces/watch-5275/configmaps/e2e-watch-test-label-changed,UID:7504b6a3-c69a-412f-90da-eda6a6168688,ResourceVersion:19285611,Generation:0,CreationTimestamp:2020-01-04 15:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  4 15:17:52.225: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5275,SelfLink:/api/v1/namespaces/watch-5275/configmaps/e2e-watch-test-label-changed,UID:7504b6a3-c69a-412f-90da-eda6a6168688,ResourceVersion:19285612,Generation:0,CreationTimestamp:2020-01-04 15:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:17:52.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5275" for this suite.
Jan  4 15:17:58.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:17:58.391: INFO: namespace watch-5275 deletion completed in 6.16065548s

• [SLOW TEST:16.409 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:17:58.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3808.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3808.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3808.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3808.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3808.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3808.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 15:18:20.642: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3808.svc.cluster.local from pod dns-3808/dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2: the server could not find the requested resource (get pods dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2)
Jan  4 15:18:20.647: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3808/dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2: the server could not find the requested resource (get pods dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2)
Jan  4 15:18:20.650: INFO: Unable to read jessie_udp@PodARecord from pod dns-3808/dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2: the server could not find the requested resource (get pods dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2)
Jan  4 15:18:20.656: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3808/dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2: the server could not find the requested resource (get pods dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2)
Jan  4 15:18:20.656: INFO: Lookups using dns-3808/dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2 failed for: [jessie_hosts@dns-querier-1.dns-test-service.dns-3808.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  4 15:18:25.702: INFO: DNS probes using dns-3808/dns-test-68e69c6b-3ec9-481b-a0f7-0b8b2dce17d2 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:18:25.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3808" for this suite.
Jan  4 15:18:31.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:18:32.014: INFO: namespace dns-3808 deletion completed in 6.246342608s

• [SLOW TEST:33.623 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:18:32.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  4 15:18:32.249: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  4 15:18:37.268: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:18:38.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1347" for this suite.
Jan  4 15:18:44.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:18:44.584: INFO: namespace replication-controller-1347 deletion completed in 6.267212256s

• [SLOW TEST:12.569 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
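The ReplicationController test above relabels one of its pods and expects the controller to release it. The release decision comes down to an equality-based selector check; a minimal sketch of that check (the helper name is ours, not the controller's actual code):

```go
package main

import "fmt"

// matchesSelector reports whether a pod's labels satisfy an equality-based
// selector, the check a ReplicationController effectively performs when
// deciding whether a pod still belongs to it. Hypothetical helper for
// illustration only.
func matchesSelector(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"name": "pod-release"}

	matched := map[string]string{"name": "pod-release"}
	relabeled := map[string]string{"name": "not-matching"}

	fmt.Println(matchesSelector(selector, matched))   // still owned by the RC
	fmt.Println(matchesSelector(selector, relabeled)) // released (orphaned)
}
```

Once a pod's labels stop matching, the controller drops its owner reference and spins up a replacement to keep the replica count, which is exactly the "Then the pod is released" step in the log.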
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:18:44.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 15:19:12.826: INFO: Container started at 2020-01-04 15:18:54 +0000 UTC, pod became ready at 2020-01-04 15:19:11 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:19:12.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1336" for this suite.
Jan  4 15:19:34.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:19:34.984: INFO: namespace container-probe-1336 deletion completed in 22.153389176s

• [SLOW TEST:50.400 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:19:34.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  4 15:19:35.181: INFO: Waiting up to 5m0s for pod "downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f" in namespace "downward-api-8566" to be "success or failure"
Jan  4 15:19:35.293: INFO: Pod "downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 112.381261ms
Jan  4 15:19:37.304: INFO: Pod "downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12256503s
Jan  4 15:19:39.317: INFO: Pod "downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136436448s
Jan  4 15:19:41.323: INFO: Pod "downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14223941s
Jan  4 15:19:43.336: INFO: Pod "downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155312969s
Jan  4 15:19:45.344: INFO: Pod "downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162809825s
Jan  4 15:19:47.372: INFO: Pod "downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.190666219s
STEP: Saw pod success
Jan  4 15:19:47.373: INFO: Pod "downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f" satisfied condition "success or failure"
Jan  4 15:19:47.389: INFO: Trying to get logs from node iruya-node pod downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f container dapi-container: 
STEP: delete the pod
Jan  4 15:19:47.542: INFO: Waiting for pod downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f to disappear
Jan  4 15:19:47.563: INFO: Pod downward-api-3ab04634-02a9-4c53-8b5a-6bf7958b6d1f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:19:47.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8566" for this suite.
Jan  4 15:19:53.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:19:53.774: INFO: namespace downward-api-8566 deletion completed in 6.201339474s

• [SLOW TEST:18.789 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:19:53.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan  4 15:19:53.944: INFO: Waiting up to 5m0s for pod "var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f" in namespace "var-expansion-6507" to be "success or failure"
Jan  4 15:19:53.951: INFO: Pod "var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.970999ms
Jan  4 15:19:55.957: INFO: Pod "var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012594855s
Jan  4 15:19:57.970: INFO: Pod "var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025210093s
Jan  4 15:19:59.975: INFO: Pod "var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030318393s
Jan  4 15:20:01.999: INFO: Pod "var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054345688s
Jan  4 15:20:04.067: INFO: Pod "var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.122910225s
Jan  4 15:20:06.075: INFO: Pod "var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.13091602s
STEP: Saw pod success
Jan  4 15:20:06.076: INFO: Pod "var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f" satisfied condition "success or failure"
Jan  4 15:20:06.079: INFO: Trying to get logs from node iruya-node pod var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f container dapi-container: 
STEP: delete the pod
Jan  4 15:20:06.425: INFO: Waiting for pod var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f to disappear
Jan  4 15:20:06.438: INFO: Pod var-expansion-e41a5943-f4ea-466e-8095-e660559c7b0f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:20:06.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6507" for this suite.
Jan  4 15:20:12.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:20:12.558: INFO: namespace var-expansion-6507 deletion completed in 6.105769607s

• [SLOW TEST:18.784 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
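The Variable Expansion test exercises Kubernetes' `$(VAR)` syntax: an env var's value may reference variables declared earlier in the same container spec, and references that cannot be resolved are left verbatim. A minimal sketch of that composition rule (simplified: no `$$` escaping, single pass):

```go
package main

import (
	"fmt"
	"strings"
)

// expand resolves $(NAME) references against previously defined variables,
// approximating how Kubernetes composes dependent env vars. Unresolvable
// references are deliberately left as-is, matching Kubernetes behavior.
func expand(value string, defined map[string]string) string {
	for k, v := range defined {
		value = strings.ReplaceAll(value, "$("+k+")", v)
	}
	return value
}

func main() {
	defined := map[string]string{"FOO": "foo-value"}
	// $(FOO) is defined and substituted; $(BAR) is not and stays literal.
	fmt.Println(expand("$(FOO);;$(BAR)", defined))
}
```

Running this prints `foo-value;;$(BAR)`, which is the observable behavior the conformance test asserts from inside the pod.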
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:20:12.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 15:20:12.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:20:24.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9679" for this suite.
Jan  4 15:21:26.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:21:26.903: INFO: namespace pods-9679 deletion completed in 1m2.126525332s

• [SLOW TEST:74.344 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:21:26.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  4 15:21:38.042: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:21:39.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-281" for this suite.
Jan  4 15:22:01.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:22:01.219: INFO: namespace replicaset-281 deletion completed in 22.126660215s

• [SLOW TEST:34.315 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:22:01.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  4 15:22:31.513: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:31.708: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:33.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:33.752: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:35.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:35.729: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:37.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:37.718: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:39.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:39.714: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:41.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:41.789: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:43.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:43.715: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:45.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:45.715: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:47.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:47.829: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:49.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:49.728: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:51.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:51.714: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:53.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:53.725: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:55.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:55.738: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:57.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:22:57.717: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:22:59.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:23:00.876: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:23:01.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:23:01.948: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:23:03.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:23:03.797: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:23:05.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:23:05.904: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  4 15:23:07.709: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  4 15:23:07.716: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:23:07.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9103" for this suite.
Jan  4 15:23:31.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:23:31.920: INFO: namespace container-lifecycle-hook-9103 deletion completed in 24.168091129s

• [SLOW TEST:90.701 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:23:31.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  4 15:23:32.278: INFO: PodSpec: initContainers in spec.initContainers
Jan  4 15:24:49.901: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-fb2782cc-ed74-49b8-aa57-27382562fd81", GenerateName:"", Namespace:"init-container-3201", SelfLink:"/api/v1/namespaces/init-container-3201/pods/pod-init-fb2782cc-ed74-49b8-aa57-27382562fd81", UID:"74eaac05-cf61-4229-956f-5f9b4eaa001f", ResourceVersion:"19286454", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"278283470"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hq9xd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0018d8000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hq9xd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hq9xd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hq9xd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019480f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0021541e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019481b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0019481d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0019481d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0019481dc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713748212, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002806060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001e8a070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001e8a0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://81e1f83128aaa70e3578f2f77f07c5d9eaeff26f3e29cf5cac151bf2770ab512"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0028060a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002806080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:24:49.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3201" for this suite.
Jan  4 15:25:14.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:25:14.172: INFO: namespace init-container-3201 deletion completed in 24.146483999s

• [SLOW TEST:102.251 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:25:14.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:25:14.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9885" for this suite.
Jan  4 15:25:20.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:25:20.597: INFO: namespace services-9885 deletion completed in 6.167021342s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.425 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:25:20.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:25:32.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9482" for this suite.
Jan  4 15:25:39.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:25:39.839: INFO: namespace kubelet-test-9482 deletion completed in 7.072751773s

• [SLOW TEST:19.241 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
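The Kubelet test above runs a command that always fails and asserts the container ends with a terminated state carrying a reason. A minimal sketch of that check against a hand-built status dict (shape only, not the framework's actual code):

```python
# A container whose command always fails finishes with state.terminated,
# which carries an exit code and a reason string such as "Error".
status = {
    "name": "bin-false",
    "state": {"terminated": {"exitCode": 1, "reason": "Error"}},
}
terminated = status["state"].get("terminated")
ok = terminated is not None and bool(terminated.get("reason"))
print(ok)  # True
```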
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:25:39.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  4 15:25:40.026: INFO: namespace kubectl-6500
Jan  4 15:25:40.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6500'
Jan  4 15:25:42.491: INFO: stderr: ""
Jan  4 15:25:42.491: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  4 15:25:43.517: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:43.518: INFO: Found 0 / 1
Jan  4 15:25:44.507: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:44.507: INFO: Found 0 / 1
Jan  4 15:25:45.499: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:45.499: INFO: Found 0 / 1
Jan  4 15:25:46.510: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:46.510: INFO: Found 0 / 1
Jan  4 15:25:47.504: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:47.504: INFO: Found 0 / 1
Jan  4 15:25:48.543: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:48.543: INFO: Found 0 / 1
Jan  4 15:25:49.507: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:49.507: INFO: Found 0 / 1
Jan  4 15:25:50.623: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:50.624: INFO: Found 0 / 1
Jan  4 15:25:51.509: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:51.509: INFO: Found 0 / 1
Jan  4 15:25:52.634: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:52.634: INFO: Found 0 / 1
Jan  4 15:25:53.517: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:53.518: INFO: Found 0 / 1
Jan  4 15:25:54.569: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:54.569: INFO: Found 0 / 1
Jan  4 15:25:55.521: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:55.521: INFO: Found 0 / 1
Jan  4 15:25:56.502: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:56.502: INFO: Found 1 / 1
Jan  4 15:25:56.502: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  4 15:25:56.507: INFO: Selector matched 1 pods for map[app:redis]
Jan  4 15:25:56.507: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  4 15:25:56.507: INFO: wait on redis-master startup in kubectl-6500 
Jan  4 15:25:56.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-bsl5w redis-master --namespace=kubectl-6500'
Jan  4 15:25:56.936: INFO: stderr: ""
Jan  4 15:25:56.936: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 04 Jan 15:25:54.335 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 04 Jan 15:25:54.336 # Server started, Redis version 3.2.12\n1:M 04 Jan 15:25:54.337 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 04 Jan 15:25:54.338 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  4 15:25:56.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6500'
Jan  4 15:25:57.240: INFO: stderr: ""
Jan  4 15:25:57.240: INFO: stdout: "service/rm2 exposed\n"
Jan  4 15:25:57.284: INFO: Service rm2 in namespace kubectl-6500 found.
STEP: exposing service
Jan  4 15:25:59.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6500'
Jan  4 15:25:59.762: INFO: stderr: ""
Jan  4 15:25:59.763: INFO: stdout: "service/rm3 exposed\n"
Jan  4 15:25:59.836: INFO: Service rm3 in namespace kubectl-6500 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:26:01.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6500" for this suite.
Jan  4 15:26:43.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:26:44.009: INFO: namespace kubectl-6500 deletion completed in 42.137268299s

• [SLOW TEST:64.169 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
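The expose steps above chain `kubectl expose rc redis-master` into `rm2` and then `rm2` into `rm3`, each time mapping a new `--port` onto the same `--target-port`. A minimal sketch of that port-mapping rule, assuming a hypothetical `expose` helper (plain Python, no cluster required):

```python
def expose(selector, port, target_port, name):
    """Build a minimal Service-like dict the way `kubectl expose` maps
    --port (the service port) onto --target-port (the container port)."""
    return {
        "metadata": {"name": name},
        "spec": {
            "selector": dict(selector),
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }

# Mirror the log: rc redis-master -> rm2 (1234 -> 6379),
# then service rm2 -> rm3 (2345 -> 6379), reusing selector and targetPort.
rm2 = expose({"app": "redis"}, 1234, 6379, "rm2")
rm3 = expose(rm2["spec"]["selector"], 2345,
             rm2["spec"]["ports"][0]["targetPort"], "rm3")
print(rm3["spec"]["ports"][0])  # {'port': 2345, 'targetPort': 6379}
```

Both derived Services keep targeting container port 6379, which is why traffic to either still reaches the Redis pod.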
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:26:44.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-223065ce-9de9-4e75-b0e6-7ef052712b02
STEP: Creating secret with name secret-projected-all-test-volume-2168fd7f-02bf-48f6-89a1-2a4743c17e03
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  4 15:26:44.303: INFO: Waiting up to 5m0s for pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4" in namespace "projected-4692" to be "success or failure"
Jan  4 15:26:44.314: INFO: Pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.876687ms
Jan  4 15:26:46.324: INFO: Pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020653055s
Jan  4 15:26:48.340: INFO: Pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036950722s
Jan  4 15:26:50.366: INFO: Pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062438305s
Jan  4 15:26:52.374: INFO: Pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070640987s
Jan  4 15:26:54.382: INFO: Pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078462819s
Jan  4 15:26:56.397: INFO: Pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.09356164s
Jan  4 15:26:58.433: INFO: Pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.130201821s
STEP: Saw pod success
Jan  4 15:26:58.433: INFO: Pod "projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4" satisfied condition "success or failure"
Jan  4 15:26:58.445: INFO: Trying to get logs from node iruya-node pod projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4 container projected-all-volume-test: 
STEP: delete the pod
Jan  4 15:26:58.777: INFO: Waiting for pod projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4 to disappear
Jan  4 15:26:58.785: INFO: Pod projected-volume-8cb81927-4af2-4ca8-a528-b886bb4958d4 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:26:58.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4692" for this suite.
Jan  4 15:27:04.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:27:04.972: INFO: namespace projected-4692 deletion completed in 6.173261325s

• [SLOW TEST:20.963 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
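The projected-volume test above combines a ConfigMap, a Secret, and downward-API fields into a single mounted directory. A minimal sketch of that merge, with hypothetical file names, simulating the mount as a file-name to file-content map:

```python
import base64

def project(configmap_data, secret_data, downward_fields):
    """Simulate a projected volume: all three sources land in one
    directory. Secret values are stored base64-encoded and decoded
    on mount; ConfigMap and downward-API values are plain strings."""
    files = {}
    files.update(configmap_data)
    files.update({k: base64.b64decode(v).decode() for k, v in secret_data.items()})
    files.update(downward_fields)
    return files

files = project(
    {"configmap-data": "value-1"},
    {"secret-data": base64.b64encode(b"value-2").decode()},
    {"podname": "projected-volume-test"},
)
print(sorted(files))  # ['configmap-data', 'podname', 'secret-data']
```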
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:27:04.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  4 15:27:17.793: INFO: Successfully updated pod "annotationupdatede81e1f9-bc1f-488f-88d4-0291a6be288d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:27:19.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6644" for this suite.
Jan  4 15:27:41.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:27:42.065: INFO: namespace projected-6644 deletion completed in 22.164106494s

• [SLOW TEST:37.092 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
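The downward-API test above updates a pod annotation and waits for the projected file to change, since the kubelet rewrites `metadata.annotations` content on its sync loop. A minimal sketch of the rendered-file view, with a hypothetical annotation key:

```python
# fieldRef metadata.annotations renders as key="value" lines in the
# mounted file; updating the annotation changes the rendered content.
annotations = {"builder": "bar"}

def render(ann):
    return "\n".join(f'{k}="{v}"' for k, v in sorted(ann.items()))

before = render(annotations)
annotations["builder"] = "foo"   # simulate the in-place pod update
after = render(annotations)
print(before != after)  # True
```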
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:27:42.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:27:52.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7644" for this suite.
Jan  4 15:28:36.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:28:36.452: INFO: namespace kubelet-test-7644 deletion completed in 44.129744719s

• [SLOW TEST:54.386 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
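The read-only-root test above relies on the container security context; a sketch of the relevant pod-spec knob as a plain dict (structure only, not a client call, command string illustrative):

```python
# With readOnlyRootFilesystem set, any write to the root filesystem
# (like the echo below) fails at runtime, which is what the test checks.
container = {
    "name": "busybox-readonly",
    "image": "busybox",
    "command": ["/bin/sh", "-c", "echo hello > /file"],  # expected to fail
    "securityContext": {"readOnlyRootFilesystem": True},
}
print(container["securityContext"]["readOnlyRootFilesystem"])  # True
```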
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:28:36.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-198be251-2d3a-4436-9d2e-66949df085e6
STEP: Creating a pod to test consume secrets
Jan  4 15:28:36.701: INFO: Waiting up to 5m0s for pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88" in namespace "secrets-4827" to be "success or failure"
Jan  4 15:28:36.862: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88": Phase="Pending", Reason="", readiness=false. Elapsed: 160.961126ms
Jan  4 15:28:38.886: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184882676s
Jan  4 15:28:40.895: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193647612s
Jan  4 15:28:42.908: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207132661s
Jan  4 15:28:44.925: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223760232s
Jan  4 15:28:46.945: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88": Phase="Pending", Reason="", readiness=false. Elapsed: 10.24323079s
Jan  4 15:28:48.950: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88": Phase="Pending", Reason="", readiness=false. Elapsed: 12.248994073s
Jan  4 15:28:50.960: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88": Phase="Pending", Reason="", readiness=false. Elapsed: 14.258230508s
Jan  4 15:28:52.973: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.272005891s
STEP: Saw pod success
Jan  4 15:28:52.974: INFO: Pod "pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88" satisfied condition "success or failure"
Jan  4 15:28:52.978: INFO: Trying to get logs from node iruya-node pod pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88 container secret-env-test: 
STEP: delete the pod
Jan  4 15:28:53.052: INFO: Waiting for pod pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88 to disappear
Jan  4 15:28:53.140: INFO: Pod pod-secrets-74f4c7d4-91f7-435f-ac74-dd7f3a87fd88 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:28:53.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4827" for this suite.
Jan  4 15:28:59.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:28:59.232: INFO: namespace secrets-4827 deletion completed in 6.084774948s

• [SLOW TEST:22.780 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
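The Secrets-in-env test above depends on the fact that a Secret's `data` map stores base64-encoded bytes, which the kubelet decodes before injecting into the container. A minimal sketch with hypothetical key and variable names:

```python
import base64

# Secret `data` values are base64-encoded at rest in the API object.
secret_data = {"data-1": base64.b64encode(b"value-1").decode()}

# The pod spec names the env var explicitly via secretKeyRef; this dict
# is an illustrative stand-in for that reference, not the real API shape.
env_ref = {"name": "SECRET_DATA", "key": "data-1"}
value = base64.b64decode(secret_data[env_ref["key"]]).decode()
print(value)  # value-1
```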
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:28:59.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fd0e9e95-d7f6-450c-9ecf-47bd8a38aefe
STEP: Creating a pod to test consume secrets
Jan  4 15:29:00.161: INFO: Waiting up to 5m0s for pod "pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60" in namespace "secrets-2170" to be "success or failure"
Jan  4 15:29:00.180: INFO: Pod "pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60": Phase="Pending", Reason="", readiness=false. Elapsed: 18.436112ms
Jan  4 15:29:02.192: INFO: Pod "pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030600013s
Jan  4 15:29:04.198: INFO: Pod "pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036360251s
Jan  4 15:29:06.215: INFO: Pod "pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053275383s
Jan  4 15:29:08.224: INFO: Pod "pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062930706s
Jan  4 15:29:10.234: INFO: Pod "pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072575743s
STEP: Saw pod success
Jan  4 15:29:10.234: INFO: Pod "pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60" satisfied condition "success or failure"
Jan  4 15:29:10.248: INFO: Trying to get logs from node iruya-node pod pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60 container secret-volume-test: 
STEP: delete the pod
Jan  4 15:29:10.395: INFO: Waiting for pod pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60 to disappear
Jan  4 15:29:10.402: INFO: Pod pod-secrets-d636c5e8-6418-472d-b89a-4e7e43ed4f60 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:29:10.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2170" for this suite.
Jan  4 15:29:16.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:29:16.523: INFO: namespace secrets-2170 deletion completed in 6.115946701s
STEP: Destroying namespace "secret-namespace-5216" for this suite.
Jan  4 15:29:22.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:29:22.682: INFO: namespace secret-namespace-5216 deletion completed in 6.158851171s

• [SLOW TEST:23.451 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
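The test above creates two namespaces (note the second teardown of "secret-namespace-5216") because Secrets are namespaced: the same name in two namespaces names two independent objects. A toy sketch of that keying, with hypothetical names and data:

```python
# Secrets are keyed by (namespace, name); mounting one never picks up a
# same-named secret from another namespace.
store = {}

def create_secret(namespace, name, data):
    store[(namespace, name)] = data

create_secret("secrets-2170", "secret-test", {"data-1": "from secrets-2170"})
create_secret("secret-namespace-5216", "secret-test", {"data-1": "from other ns"})

mounted = store[("secrets-2170", "secret-test")]
print(mounted["data-1"])  # from secrets-2170
```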
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:29:22.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:29:22.809: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3" in namespace "projected-7178" to be "success or failure"
Jan  4 15:29:22.814: INFO: Pod "downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.156353ms
Jan  4 15:29:24.821: INFO: Pod "downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012288036s
Jan  4 15:29:26.831: INFO: Pod "downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022681679s
Jan  4 15:29:28.892: INFO: Pod "downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083293662s
Jan  4 15:29:31.389: INFO: Pod "downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.580670481s
Jan  4 15:29:33.403: INFO: Pod "downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.594597319s
STEP: Saw pod success
Jan  4 15:29:33.404: INFO: Pod "downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3" satisfied condition "success or failure"
Jan  4 15:29:33.412: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3 container client-container: 
STEP: delete the pod
Jan  4 15:29:33.674: INFO: Waiting for pod downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3 to disappear
Jan  4 15:29:33.692: INFO: Pod downwardapi-volume-9d8acc7e-a3ef-405f-93c5-344f493b2ad3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:29:33.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7178" for this suite.
Jan  4 15:29:39.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:29:39.897: INFO: namespace projected-7178 deletion completed in 6.198673506s

• [SLOW TEST:17.214 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
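The downward-API test above exposes `requests.memory` to the container; with the default divisor of 1, a binary-suffix quantity like `64Mi` renders as its plain byte count. A simplified parser for such quantities (handles only the binary suffixes used here, not the full Kubernetes quantity grammar):

```python
SUFFIX = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def quantity_to_bytes(q):
    """Parse a simple binary-suffix quantity (e.g. '64Mi') to bytes."""
    for s, mult in SUFFIX.items():
        if q.endswith(s):
            return int(q[:-len(s)]) * mult
    return int(q)  # bare integers are already bytes

print(quantity_to_bytes("64Mi"))  # 67108864
```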
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:29:39.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:30:12.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8427" for this suite.
Jan  4 15:30:18.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:30:18.696: INFO: namespace namespaces-8427 deletion completed in 6.154820471s
STEP: Destroying namespace "nsdeletetest-2935" for this suite.
Jan  4 15:30:18.699: INFO: Namespace nsdeletetest-2935 was already deleted
STEP: Destroying namespace "nsdeletetest-8644" for this suite.
Jan  4 15:30:24.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:30:24.879: INFO: namespace nsdeletetest-8644 deletion completed in 6.180186463s

• [SLOW TEST:44.982 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
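The namespace test above repeatedly waits on conditions (pod running, namespace removed) with a deadline, like the suite's "Waiting up to …" lines. A generic polling helper in that spirit (a sketch, not the framework's actual implementation):

```python
import time

def wait_for(condition, timeout, interval=0.01, clock=time.monotonic):
    """Poll `condition` until it returns truthy or `timeout` elapses;
    mirrors the wait-for-namespace-deletion loop in the log above."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Toy stand-in for "the namespace is no longer listed": the first poll
# observes the server finishing finalization.
state = {"exists": True}

def namespace_gone():
    state["exists"] = False  # simulate finalizers completing
    return not state["exists"]

ok = wait_for(namespace_gone, 1.0)
print(ok)  # True
```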
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:30:24.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-823
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-823
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-823
Jan  4 15:30:25.078: INFO: Found 0 stateful pods, waiting for 1
Jan  4 15:30:35.116: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  4 15:30:35.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-823 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 15:30:35.626: INFO: stderr: "I0104 15:30:35.308212    1924 log.go:172] (0xc0007e6420) (0xc0007326e0) Create stream\nI0104 15:30:35.308397    1924 log.go:172] (0xc0007e6420) (0xc0007326e0) Stream added, broadcasting: 1\nI0104 15:30:35.316915    1924 log.go:172] (0xc0007e6420) Reply frame received for 1\nI0104 15:30:35.317019    1924 log.go:172] (0xc0007e6420) (0xc000732780) Create stream\nI0104 15:30:35.317032    1924 log.go:172] (0xc0007e6420) (0xc000732780) Stream added, broadcasting: 3\nI0104 15:30:35.319277    1924 log.go:172] (0xc0007e6420) Reply frame received for 3\nI0104 15:30:35.319314    1924 log.go:172] (0xc0007e6420) (0xc0004f2000) Create stream\nI0104 15:30:35.319322    1924 log.go:172] (0xc0007e6420) (0xc0004f2000) Stream added, broadcasting: 5\nI0104 15:30:35.320683    1924 log.go:172] (0xc0007e6420) Reply frame received for 5\nI0104 15:30:35.445664    1924 log.go:172] (0xc0007e6420) Data frame received for 5\nI0104 15:30:35.445785    1924 log.go:172] (0xc0004f2000) (5) Data frame handling\nI0104 15:30:35.445811    1924 log.go:172] (0xc0004f2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 15:30:35.489160    1924 log.go:172] (0xc0007e6420) Data frame received for 3\nI0104 15:30:35.489188    1924 log.go:172] (0xc000732780) (3) Data frame handling\nI0104 15:30:35.489206    1924 log.go:172] (0xc000732780) (3) Data frame sent\nI0104 15:30:35.614342    1924 log.go:172] (0xc0007e6420) Data frame received for 1\nI0104 15:30:35.614505    1924 log.go:172] (0xc0007e6420) (0xc000732780) Stream removed, broadcasting: 3\nI0104 15:30:35.614630    1924 log.go:172] (0xc0007326e0) (1) Data frame handling\nI0104 15:30:35.614660    1924 log.go:172] (0xc0007326e0) (1) Data frame sent\nI0104 15:30:35.614674    1924 log.go:172] (0xc0007e6420) (0xc0007326e0) Stream removed, broadcasting: 1\nI0104 15:30:35.615111    1924 log.go:172] (0xc0007e6420) (0xc0004f2000) Stream removed, broadcasting: 5\nI0104 15:30:35.615140    1924 log.go:172] 
(0xc0007e6420) (0xc0007326e0) Stream removed, broadcasting: 1\nI0104 15:30:35.615149    1924 log.go:172] (0xc0007e6420) (0xc000732780) Stream removed, broadcasting: 3\nI0104 15:30:35.615161    1924 log.go:172] (0xc0007e6420) (0xc0004f2000) Stream removed, broadcasting: 5\nI0104 15:30:35.615419    1924 log.go:172] (0xc0007e6420) Go away received\n"
Jan  4 15:30:35.626: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 15:30:35.626: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 15:30:35.733: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  4 15:30:45.739: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 15:30:45.740: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 15:30:45.759: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999654s
Jan  4 15:30:46.767: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994350658s
Jan  4 15:30:47.778: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.985727075s
Jan  4 15:30:48.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.975010569s
Jan  4 15:30:49.828: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.937318788s
Jan  4 15:30:50.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.925363556s
Jan  4 15:30:51.846: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.91459713s
Jan  4 15:30:52.872: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.906503115s
Jan  4 15:30:53.888: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.880104821s
Jan  4 15:30:54.905: INFO: Verifying statefulset ss doesn't scale past 1 for another 864.07518ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-823
Jan  4 15:30:55.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-823 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 15:30:56.414: INFO: stderr: "I0104 15:30:56.128393    1945 log.go:172] (0xc0009a80b0) (0xc0008b25a0) Create stream\nI0104 15:30:56.128781    1945 log.go:172] (0xc0009a80b0) (0xc0008b25a0) Stream added, broadcasting: 1\nI0104 15:30:56.142325    1945 log.go:172] (0xc0009a80b0) Reply frame received for 1\nI0104 15:30:56.142431    1945 log.go:172] (0xc0009a80b0) (0xc000512280) Create stream\nI0104 15:30:56.142459    1945 log.go:172] (0xc0009a80b0) (0xc000512280) Stream added, broadcasting: 3\nI0104 15:30:56.147070    1945 log.go:172] (0xc0009a80b0) Reply frame received for 3\nI0104 15:30:56.147152    1945 log.go:172] (0xc0009a80b0) (0xc00020e000) Create stream\nI0104 15:30:56.147190    1945 log.go:172] (0xc0009a80b0) (0xc00020e000) Stream added, broadcasting: 5\nI0104 15:30:56.153439    1945 log.go:172] (0xc0009a80b0) Reply frame received for 5\nI0104 15:30:56.278458    1945 log.go:172] (0xc0009a80b0) Data frame received for 3\nI0104 15:30:56.278506    1945 log.go:172] (0xc000512280) (3) Data frame handling\nI0104 15:30:56.278528    1945 log.go:172] (0xc000512280) (3) Data frame sent\nI0104 15:30:56.278570    1945 log.go:172] (0xc0009a80b0) Data frame received for 5\nI0104 15:30:56.278593    1945 log.go:172] (0xc00020e000) (5) Data frame handling\nI0104 15:30:56.278598    1945 log.go:172] (0xc00020e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 15:30:56.405817    1945 log.go:172] (0xc0009a80b0) Data frame received for 1\nI0104 15:30:56.406088    1945 log.go:172] (0xc0009a80b0) (0xc000512280) Stream removed, broadcasting: 3\nI0104 15:30:56.406131    1945 log.go:172] (0xc0008b25a0) (1) Data frame handling\nI0104 15:30:56.406152    1945 log.go:172] (0xc0008b25a0) (1) Data frame sent\nI0104 15:30:56.406165    1945 log.go:172] (0xc0009a80b0) (0xc0008b25a0) Stream removed, broadcasting: 1\nI0104 15:30:56.406185    1945 log.go:172] (0xc0009a80b0) (0xc00020e000) Stream removed, broadcasting: 5\nI0104 15:30:56.406242    1945 log.go:172] 
(0xc0009a80b0) Go away received\nI0104 15:30:56.406768    1945 log.go:172] (0xc0009a80b0) (0xc0008b25a0) Stream removed, broadcasting: 1\nI0104 15:30:56.406808    1945 log.go:172] (0xc0009a80b0) (0xc000512280) Stream removed, broadcasting: 3\nI0104 15:30:56.406840    1945 log.go:172] (0xc0009a80b0) (0xc00020e000) Stream removed, broadcasting: 5\n"
Jan  4 15:30:56.414: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 15:30:56.414: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 15:30:56.424: INFO: Found 1 stateful pods, waiting for 3
Jan  4 15:31:06.435: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:31:06.435: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:31:06.435: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 15:31:16.438: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:31:16.438: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 15:31:16.438: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  4 15:31:16.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-823 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 15:31:16.986: INFO: stderr: "I0104 15:31:16.644923    1964 log.go:172] (0xc0004fa420) (0xc0006ac640) Create stream\nI0104 15:31:16.645070    1964 log.go:172] (0xc0004fa420) (0xc0006ac640) Stream added, broadcasting: 1\nI0104 15:31:16.652717    1964 log.go:172] (0xc0004fa420) Reply frame received for 1\nI0104 15:31:16.652743    1964 log.go:172] (0xc0004fa420) (0xc00044a000) Create stream\nI0104 15:31:16.652754    1964 log.go:172] (0xc0004fa420) (0xc00044a000) Stream added, broadcasting: 3\nI0104 15:31:16.656339    1964 log.go:172] (0xc0004fa420) Reply frame received for 3\nI0104 15:31:16.656401    1964 log.go:172] (0xc0004fa420) (0xc0006ac6e0) Create stream\nI0104 15:31:16.656430    1964 log.go:172] (0xc0004fa420) (0xc0006ac6e0) Stream added, broadcasting: 5\nI0104 15:31:16.659662    1964 log.go:172] (0xc0004fa420) Reply frame received for 5\nI0104 15:31:16.763037    1964 log.go:172] (0xc0004fa420) Data frame received for 5\nI0104 15:31:16.763092    1964 log.go:172] (0xc0006ac6e0) (5) Data frame handling\nI0104 15:31:16.763101    1964 log.go:172] (0xc0006ac6e0) (5) Data frame sent\nI0104 15:31:16.763106    1964 log.go:172] (0xc0004fa420) Data frame received for 3\nI0104 15:31:16.763114    1964 log.go:172] (0xc00044a000) (3) Data frame handling\nI0104 15:31:16.763120    1964 log.go:172] (0xc00044a000) (3) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 15:31:16.979385    1964 log.go:172] (0xc0004fa420) (0xc00044a000) Stream removed, broadcasting: 3\nI0104 15:31:16.979504    1964 log.go:172] (0xc0004fa420) Data frame received for 1\nI0104 15:31:16.979564    1964 log.go:172] (0xc0004fa420) (0xc0006ac6e0) Stream removed, broadcasting: 5\nI0104 15:31:16.979598    1964 log.go:172] (0xc0006ac640) (1) Data frame handling\nI0104 15:31:16.979616    1964 log.go:172] (0xc0006ac640) (1) Data frame sent\nI0104 15:31:16.979642    1964 log.go:172] (0xc0004fa420) (0xc0006ac640) Stream removed, broadcasting: 1\nI0104 15:31:16.979660    1964 log.go:172] 
(0xc0004fa420) Go away received\nI0104 15:31:16.980312    1964 log.go:172] (0xc0004fa420) (0xc0006ac640) Stream removed, broadcasting: 1\nI0104 15:31:16.980330    1964 log.go:172] (0xc0004fa420) (0xc00044a000) Stream removed, broadcasting: 3\nI0104 15:31:16.980341    1964 log.go:172] (0xc0004fa420) (0xc0006ac6e0) Stream removed, broadcasting: 5\n"
Jan  4 15:31:16.987: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 15:31:16.987: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 15:31:16.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-823 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 15:31:17.354: INFO: stderr: "I0104 15:31:17.113498    1982 log.go:172] (0xc0008ca420) (0xc000828640) Create stream\nI0104 15:31:17.113540    1982 log.go:172] (0xc0008ca420) (0xc000828640) Stream added, broadcasting: 1\nI0104 15:31:17.116134    1982 log.go:172] (0xc0008ca420) Reply frame received for 1\nI0104 15:31:17.116164    1982 log.go:172] (0xc0008ca420) (0xc0005e0280) Create stream\nI0104 15:31:17.116175    1982 log.go:172] (0xc0008ca420) (0xc0005e0280) Stream added, broadcasting: 3\nI0104 15:31:17.116916    1982 log.go:172] (0xc0008ca420) Reply frame received for 3\nI0104 15:31:17.116935    1982 log.go:172] (0xc0008ca420) (0xc0008286e0) Create stream\nI0104 15:31:17.116943    1982 log.go:172] (0xc0008ca420) (0xc0008286e0) Stream added, broadcasting: 5\nI0104 15:31:17.117602    1982 log.go:172] (0xc0008ca420) Reply frame received for 5\nI0104 15:31:17.228408    1982 log.go:172] (0xc0008ca420) Data frame received for 5\nI0104 15:31:17.228464    1982 log.go:172] (0xc0008286e0) (5) Data frame handling\nI0104 15:31:17.228506    1982 log.go:172] (0xc0008286e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 15:31:17.264685    1982 log.go:172] (0xc0008ca420) Data frame received for 3\nI0104 15:31:17.264730    1982 log.go:172] (0xc0005e0280) (3) Data frame handling\nI0104 15:31:17.264762    1982 log.go:172] (0xc0005e0280) (3) Data frame sent\nI0104 15:31:17.348904    1982 log.go:172] (0xc0008ca420) Data frame received for 1\nI0104 15:31:17.349098    1982 log.go:172] (0xc0008ca420) (0xc0005e0280) Stream removed, broadcasting: 3\nI0104 15:31:17.349148    1982 log.go:172] (0xc000828640) (1) Data frame handling\nI0104 15:31:17.349172    1982 log.go:172] (0xc000828640) (1) Data frame sent\nI0104 15:31:17.349200    1982 log.go:172] (0xc0008ca420) (0xc0008286e0) Stream removed, broadcasting: 5\nI0104 15:31:17.349228    1982 log.go:172] (0xc0008ca420) (0xc000828640) Stream removed, broadcasting: 1\nI0104 15:31:17.349239    1982 log.go:172] 
(0xc0008ca420) Go away received\nI0104 15:31:17.349804    1982 log.go:172] (0xc0008ca420) (0xc000828640) Stream removed, broadcasting: 1\nI0104 15:31:17.349822    1982 log.go:172] (0xc0008ca420) (0xc0005e0280) Stream removed, broadcasting: 3\nI0104 15:31:17.349828    1982 log.go:172] (0xc0008ca420) (0xc0008286e0) Stream removed, broadcasting: 5\n"
Jan  4 15:31:17.355: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 15:31:17.355: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 15:31:17.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-823 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  4 15:31:17.708: INFO: stderr: "I0104 15:31:17.479733    1997 log.go:172] (0xc00052a420) (0xc000844780) Create stream\nI0104 15:31:17.479892    1997 log.go:172] (0xc00052a420) (0xc000844780) Stream added, broadcasting: 1\nI0104 15:31:17.485246    1997 log.go:172] (0xc00052a420) Reply frame received for 1\nI0104 15:31:17.485267    1997 log.go:172] (0xc00052a420) (0xc000844820) Create stream\nI0104 15:31:17.485272    1997 log.go:172] (0xc00052a420) (0xc000844820) Stream added, broadcasting: 3\nI0104 15:31:17.486046    1997 log.go:172] (0xc00052a420) Reply frame received for 3\nI0104 15:31:17.486071    1997 log.go:172] (0xc00052a420) (0xc000317ae0) Create stream\nI0104 15:31:17.486090    1997 log.go:172] (0xc00052a420) (0xc000317ae0) Stream added, broadcasting: 5\nI0104 15:31:17.487281    1997 log.go:172] (0xc00052a420) Reply frame received for 5\nI0104 15:31:17.575952    1997 log.go:172] (0xc00052a420) Data frame received for 5\nI0104 15:31:17.576042    1997 log.go:172] (0xc000317ae0) (5) Data frame handling\nI0104 15:31:17.576070    1997 log.go:172] (0xc000317ae0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0104 15:31:17.608882    1997 log.go:172] (0xc00052a420) Data frame received for 3\nI0104 15:31:17.608912    1997 log.go:172] (0xc000844820) (3) Data frame handling\nI0104 15:31:17.608931    1997 log.go:172] (0xc000844820) (3) Data frame sent\nI0104 15:31:17.700578    1997 log.go:172] (0xc00052a420) Data frame received for 1\nI0104 15:31:17.700674    1997 log.go:172] (0xc00052a420) (0xc000844820) Stream removed, broadcasting: 3\nI0104 15:31:17.700962    1997 log.go:172] (0xc000844780) (1) Data frame handling\nI0104 15:31:17.700992    1997 log.go:172] (0xc000844780) (1) Data frame sent\nI0104 15:31:17.700997    1997 log.go:172] (0xc00052a420) (0xc000317ae0) Stream removed, broadcasting: 5\nI0104 15:31:17.701083    1997 log.go:172] (0xc00052a420) (0xc000844780) Stream removed, broadcasting: 1\nI0104 15:31:17.701109    1997 log.go:172] 
(0xc00052a420) Go away received\nI0104 15:31:17.701642    1997 log.go:172] (0xc00052a420) (0xc000844780) Stream removed, broadcasting: 1\nI0104 15:31:17.701667    1997 log.go:172] (0xc00052a420) (0xc000844820) Stream removed, broadcasting: 3\nI0104 15:31:17.701684    1997 log.go:172] (0xc00052a420) (0xc000317ae0) Stream removed, broadcasting: 5\n"
Jan  4 15:31:17.708: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  4 15:31:17.708: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  4 15:31:17.708: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 15:31:17.713: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  4 15:31:27.731: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 15:31:27.732: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 15:31:27.732: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  4 15:31:27.778: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999471s
Jan  4 15:31:28.784: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.965171143s
Jan  4 15:31:29.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.959023806s
Jan  4 15:31:30.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.946460061s
Jan  4 15:31:31.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.93411485s
Jan  4 15:31:32.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.922898832s
Jan  4 15:31:33.839: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.915134652s
Jan  4 15:31:34.858: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.903871785s
Jan  4 15:31:35.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.884565369s
Jan  4 15:31:36.881: INFO: Verifying statefulset ss doesn't scale past 3 for another 873.506184ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-823
Jan  4 15:31:37.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-823 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 15:31:38.409: INFO: stderr: "I0104 15:31:38.102130    2017 log.go:172] (0xc00051c2c0) (0xc0004b28c0) Create stream\nI0104 15:31:38.102255    2017 log.go:172] (0xc00051c2c0) (0xc0004b28c0) Stream added, broadcasting: 1\nI0104 15:31:38.114285    2017 log.go:172] (0xc00051c2c0) Reply frame received for 1\nI0104 15:31:38.114337    2017 log.go:172] (0xc00051c2c0) (0xc0004b2960) Create stream\nI0104 15:31:38.114351    2017 log.go:172] (0xc00051c2c0) (0xc0004b2960) Stream added, broadcasting: 3\nI0104 15:31:38.116584    2017 log.go:172] (0xc00051c2c0) Reply frame received for 3\nI0104 15:31:38.116660    2017 log.go:172] (0xc00051c2c0) (0xc0008ea000) Create stream\nI0104 15:31:38.116676    2017 log.go:172] (0xc00051c2c0) (0xc0008ea000) Stream added, broadcasting: 5\nI0104 15:31:38.120802    2017 log.go:172] (0xc00051c2c0) Reply frame received for 5\nI0104 15:31:38.233289    2017 log.go:172] (0xc00051c2c0) Data frame received for 5\nI0104 15:31:38.233381    2017 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0104 15:31:38.233415    2017 log.go:172] (0xc0008ea000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 15:31:38.233439    2017 log.go:172] (0xc00051c2c0) Data frame received for 3\nI0104 15:31:38.233457    2017 log.go:172] (0xc0004b2960) (3) Data frame handling\nI0104 15:31:38.233470    2017 log.go:172] (0xc0004b2960) (3) Data frame sent\nI0104 15:31:38.381769    2017 log.go:172] (0xc00051c2c0) (0xc0004b2960) Stream removed, broadcasting: 3\nI0104 15:31:38.381978    2017 log.go:172] (0xc00051c2c0) Data frame received for 1\nI0104 15:31:38.382013    2017 log.go:172] (0xc0004b28c0) (1) Data frame handling\nI0104 15:31:38.382035    2017 log.go:172] (0xc0004b28c0) (1) Data frame sent\nI0104 15:31:38.382052    2017 log.go:172] (0xc00051c2c0) (0xc0004b28c0) Stream removed, broadcasting: 1\nI0104 15:31:38.382127    2017 log.go:172] (0xc00051c2c0) (0xc0008ea000) Stream removed, broadcasting: 5\nI0104 15:31:38.382196    2017 log.go:172] 
(0xc00051c2c0) Go away received\nI0104 15:31:38.383262    2017 log.go:172] (0xc00051c2c0) (0xc0004b28c0) Stream removed, broadcasting: 1\nI0104 15:31:38.383294    2017 log.go:172] (0xc00051c2c0) (0xc0004b2960) Stream removed, broadcasting: 3\nI0104 15:31:38.383328    2017 log.go:172] (0xc00051c2c0) (0xc0008ea000) Stream removed, broadcasting: 5\n"
Jan  4 15:31:38.409: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 15:31:38.409: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 15:31:38.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-823 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 15:31:38.930: INFO: stderr: "I0104 15:31:38.610083    2036 log.go:172] (0xc0006fa0b0) (0xc0007605a0) Create stream\nI0104 15:31:38.610400    2036 log.go:172] (0xc0006fa0b0) (0xc0007605a0) Stream added, broadcasting: 1\nI0104 15:31:38.615343    2036 log.go:172] (0xc0006fa0b0) Reply frame received for 1\nI0104 15:31:38.615380    2036 log.go:172] (0xc0006fa0b0) (0xc00070e280) Create stream\nI0104 15:31:38.615393    2036 log.go:172] (0xc0006fa0b0) (0xc00070e280) Stream added, broadcasting: 3\nI0104 15:31:38.617045    2036 log.go:172] (0xc0006fa0b0) Reply frame received for 3\nI0104 15:31:38.617066    2036 log.go:172] (0xc0006fa0b0) (0xc000336000) Create stream\nI0104 15:31:38.617075    2036 log.go:172] (0xc0006fa0b0) (0xc000336000) Stream added, broadcasting: 5\nI0104 15:31:38.618259    2036 log.go:172] (0xc0006fa0b0) Reply frame received for 5\nI0104 15:31:38.810240    2036 log.go:172] (0xc0006fa0b0) Data frame received for 3\nI0104 15:31:38.810288    2036 log.go:172] (0xc00070e280) (3) Data frame handling\nI0104 15:31:38.810304    2036 log.go:172] (0xc00070e280) (3) Data frame sent\nI0104 15:31:38.810818    2036 log.go:172] (0xc0006fa0b0) Data frame received for 5\nI0104 15:31:38.810830    2036 log.go:172] (0xc000336000) (5) Data frame handling\nI0104 15:31:38.810838    2036 log.go:172] (0xc000336000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 15:31:38.926622    2036 log.go:172] (0xc0006fa0b0) (0xc00070e280) Stream removed, broadcasting: 3\nI0104 15:31:38.926675    2036 log.go:172] (0xc0006fa0b0) Data frame received for 1\nI0104 15:31:38.926722    2036 log.go:172] (0xc0006fa0b0) (0xc000336000) Stream removed, broadcasting: 5\nI0104 15:31:38.926753    2036 log.go:172] (0xc0007605a0) (1) Data frame handling\nI0104 15:31:38.926764    2036 log.go:172] (0xc0007605a0) (1) Data frame sent\nI0104 15:31:38.926775    2036 log.go:172] (0xc0006fa0b0) (0xc0007605a0) Stream removed, broadcasting: 1\nI0104 15:31:38.926784    2036 log.go:172] 
(0xc0006fa0b0) Go away received\nI0104 15:31:38.927237    2036 log.go:172] (0xc0006fa0b0) (0xc0007605a0) Stream removed, broadcasting: 1\nI0104 15:31:38.927246    2036 log.go:172] (0xc0006fa0b0) (0xc00070e280) Stream removed, broadcasting: 3\nI0104 15:31:38.927251    2036 log.go:172] (0xc0006fa0b0) (0xc000336000) Stream removed, broadcasting: 5\n"
Jan  4 15:31:38.930: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 15:31:38.930: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 15:31:38.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-823 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  4 15:31:39.572: INFO: stderr: "I0104 15:31:39.117396    2050 log.go:172] (0xc00092c790) (0xc0009d6a00) Create stream\nI0104 15:31:39.117616    2050 log.go:172] (0xc00092c790) (0xc0009d6a00) Stream added, broadcasting: 1\nI0104 15:31:39.131554    2050 log.go:172] (0xc00092c790) Reply frame received for 1\nI0104 15:31:39.131618    2050 log.go:172] (0xc00092c790) (0xc0009d6000) Create stream\nI0104 15:31:39.131630    2050 log.go:172] (0xc00092c790) (0xc0009d6000) Stream added, broadcasting: 3\nI0104 15:31:39.133449    2050 log.go:172] (0xc00092c790) Reply frame received for 3\nI0104 15:31:39.133488    2050 log.go:172] (0xc00092c790) (0xc0005520a0) Create stream\nI0104 15:31:39.133501    2050 log.go:172] (0xc00092c790) (0xc0005520a0) Stream added, broadcasting: 5\nI0104 15:31:39.134674    2050 log.go:172] (0xc00092c790) Reply frame received for 5\nI0104 15:31:39.305381    2050 log.go:172] (0xc00092c790) Data frame received for 3\nI0104 15:31:39.305547    2050 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0104 15:31:39.305586    2050 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0104 15:31:39.305758    2050 log.go:172] (0xc00092c790) Data frame received for 5\nI0104 15:31:39.305867    2050 log.go:172] (0xc0005520a0) (5) Data frame handling\nI0104 15:31:39.305920    2050 log.go:172] (0xc0005520a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0104 15:31:39.562403    2050 log.go:172] (0xc00092c790) (0xc0009d6000) Stream removed, broadcasting: 3\nI0104 15:31:39.562515    2050 log.go:172] (0xc00092c790) Data frame received for 1\nI0104 15:31:39.562541    2050 log.go:172] (0xc0009d6a00) (1) Data frame handling\nI0104 15:31:39.562590    2050 log.go:172] (0xc0009d6a00) (1) Data frame sent\nI0104 15:31:39.562601    2050 log.go:172] (0xc00092c790) (0xc0009d6a00) Stream removed, broadcasting: 1\nI0104 15:31:39.562947    2050 log.go:172] (0xc00092c790) (0xc0005520a0) Stream removed, broadcasting: 5\nI0104 15:31:39.563246    2050 log.go:172] 
(0xc00092c790) Go away received\nI0104 15:31:39.563441    2050 log.go:172] (0xc00092c790) (0xc0009d6a00) Stream removed, broadcasting: 1\nI0104 15:31:39.563473    2050 log.go:172] (0xc00092c790) (0xc0009d6000) Stream removed, broadcasting: 3\nI0104 15:31:39.563491    2050 log.go:172] (0xc00092c790) (0xc0005520a0) Stream removed, broadcasting: 5\n"
Jan  4 15:31:39.572: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  4 15:31:39.572: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  4 15:31:39.572: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  4 15:32:19.596: INFO: Deleting all statefulset in ns statefulset-823
Jan  4 15:32:19.600: INFO: Scaling statefulset ss to 0
Jan  4 15:32:19.610: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 15:32:19.612: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:32:19.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-823" for this suite.
Jan  4 15:32:28.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:32:28.164: INFO: namespace statefulset-823 deletion completed in 8.202734982s

• [SLOW TEST:123.285 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
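Note: the readiness-probe break in the StatefulSet test above is driven entirely by the two `kubectl exec` invocations visible in the log (moving nginx's `index.html` out of, and back into, the webroot). A minimal local sketch of the same `mv -v … || true` idiom, using a scratch directory as a stand-in for `/usr/share/nginx/html` (the directory names here are illustrative, not from the log), might look like:

```shell
# Stand-in for the in-pod probe break: 'webroot' plays the role of
# /usr/share/nginx/html. Moving index.html away is what makes the pod's
# readiness probe fail, and '|| true' keeps the exec exit status 0
# even if the file was already moved.
demo=$(mktemp -d)
mkdir -p "$demo/webroot"
echo ok > "$demo/webroot/index.html"
mv -v "$demo/webroot/index.html" "$demo/" || true
test ! -e "$demo/webroot/index.html" && echo "probe target gone"
# Restore, as the test does before scaling up again.
mv -v "$demo/index.html" "$demo/webroot/" || true
test -e "$demo/webroot/index.html" && echo "probe target restored"
```

With the probe failing on one pod, the controller halts ordered scaling, which is exactly the "doesn't scale past N" countdown seen in the log.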
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:32:28.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ec9ee70a-82dd-4843-b2f4-ba4c25f30aa9
STEP: Creating a pod to test consume secrets
Jan  4 15:32:28.324: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3" in namespace "projected-5370" to be "success or failure"
Jan  4 15:32:28.354: INFO: Pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3": Phase="Pending", Reason="", readiness=false. Elapsed: 29.627055ms
Jan  4 15:32:30.360: INFO: Pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036170871s
Jan  4 15:32:32.365: INFO: Pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041265666s
Jan  4 15:32:34.374: INFO: Pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049758943s
Jan  4 15:32:36.627: INFO: Pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302480119s
Jan  4 15:32:38.633: INFO: Pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.309338237s
Jan  4 15:32:40.639: INFO: Pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.314775588s
Jan  4 15:32:42.662: INFO: Pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.337831112s
STEP: Saw pod success
Jan  4 15:32:42.662: INFO: Pod "pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3" satisfied condition "success or failure"
Jan  4 15:32:42.665: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3 container projected-secret-volume-test: 
STEP: delete the pod
Jan  4 15:32:42.728: INFO: Waiting for pod pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3 to disappear
Jan  4 15:32:42.732: INFO: Pod pod-projected-secrets-ddce62ea-f6dc-4fbb-a2ab-7e022a4145b3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:32:42.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5370" for this suite.
Jan  4 15:32:49.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:32:49.352: INFO: namespace projected-5370 deletion completed in 6.615243011s

• [SLOW TEST:21.187 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
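Note: the projected-secret test above mounts a secret volume with `defaultMode` set and asserts the mounted file's permission bits from inside the pod. A local stand-in for that permission check (0400 is only an assumed example mode; the log does not show the value the test used) could be:

```shell
# Create a scratch file, give it the mode a projected volume's
# defaultMode would apply, and read the bits back with stat,
# analogous to what the test container does against the mount path.
f=$(mktemp)
chmod 0400 "$f"
stat -c '%a' "$f"    # GNU stat; macOS would use: stat -f '%Lp'
```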
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:32:49.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
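The "delete options" exercised here are the DeleteOptions sent with the rc deletion: propagationPolicy=Orphan tells the garbage collector to leave the rc's pods alone. A minimal sketch of that request body (the kubectl flag shown is the v1.15-era equivalent; the rc name is hypothetical):

```shell
# DeleteOptions body that requests orphaning instead of cascading deletion.
body='{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
echo "$body"
# Against a live cluster, a rough v1.15-era kubectl equivalent would be:
#   kubectl delete rc my-rc --cascade=false    # "my-rc" is a hypothetical name
```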
STEP: Gathering metrics
W0104 15:33:34.372100       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  4 15:33:34.372: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:33:34.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3516" for this suite.
Jan  4 15:33:45.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:33:45.708: INFO: namespace gc-3516 deletion completed in 11.329759201s

• [SLOW TEST:56.355 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:33:45.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:33:46.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f" in namespace "downward-api-469" to be "success or failure"
Jan  4 15:33:46.736: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Pending", Reason="", readiness=false. Elapsed: 283.547439ms
Jan  4 15:33:48.995: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543046104s
Jan  4 15:33:51.006: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.553500896s
Jan  4 15:33:53.066: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.613951317s
Jan  4 15:33:58.551: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.098295604s
Jan  4 15:34:00.564: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.111659216s
Jan  4 15:34:02.928: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.475940128s
Jan  4 15:34:04.952: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.499126305s
Jan  4 15:34:06.982: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.529189639s
Jan  4 15:34:08.989: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.536414438s
STEP: Saw pod success
Jan  4 15:34:08.989: INFO: Pod "downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f" satisfied condition "success or failure"
Jan  4 15:34:08.991: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f container client-container: 
STEP: delete the pod
Jan  4 15:34:09.069: INFO: Waiting for pod downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f to disappear
Jan  4 15:34:09.090: INFO: Pod downwardapi-volume-8b48b400-3200-4d0f-9d20-ef3ead68607f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:34:09.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-469" for this suite.
Jan  4 15:34:15.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:34:15.290: INFO: namespace downward-api-469 deletion completed in 6.195942891s

• [SLOW TEST:29.582 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:34:15.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
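Kubernetes expands `$(VAR)` references in a container's command field itself, before any shell runs; that is the substitution this pod verifies. A rough local simulation, using a hypothetical variable name:

```shell
# Simulate kubelet-style $(VAR) expansion in a container command string.
MY_VAR="test-value"          # env var as it would appear in the pod spec (hypothetical)
cmd='echo $(MY_VAR)'         # command as written in the manifest
expanded=$(echo "$cmd" | sed "s/\$(MY_VAR)/$MY_VAR/")
echo "$expanded"             # -> echo test-value
```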
Jan  4 15:34:15.389: INFO: Waiting up to 5m0s for pod "var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e" in namespace "var-expansion-6482" to be "success or failure"
Jan  4 15:34:15.403: INFO: Pod "var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.107368ms
Jan  4 15:34:17.413: INFO: Pod "var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02459477s
Jan  4 15:34:19.427: INFO: Pod "var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037717038s
Jan  4 15:34:21.439: INFO: Pod "var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049723183s
Jan  4 15:34:23.449: INFO: Pod "var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060570861s
STEP: Saw pod success
Jan  4 15:34:23.450: INFO: Pod "var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e" satisfied condition "success or failure"
Jan  4 15:34:23.460: INFO: Trying to get logs from node iruya-node pod var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e container dapi-container: 
STEP: delete the pod
Jan  4 15:34:23.510: INFO: Waiting for pod var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e to disappear
Jan  4 15:34:23.597: INFO: Pod var-expansion-58374274-f191-4a1e-bf5a-f779b5a1f74e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:34:23.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6482" for this suite.
Jan  4 15:34:29.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:34:29.808: INFO: namespace var-expansion-6482 deletion completed in 6.196789176s

• [SLOW TEST:14.518 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:34:29.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 15:34:30.059: INFO: Create a RollingUpdate DaemonSet
Jan  4 15:34:30.068: INFO: Check that daemon pods launch on every node of the cluster
Jan  4 15:34:30.111: INFO: Number of nodes with available pods: 0
Jan  4 15:34:30.111: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:31.123: INFO: Number of nodes with available pods: 0
Jan  4 15:34:31.123: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:32.308: INFO: Number of nodes with available pods: 0
Jan  4 15:34:32.309: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:33.628: INFO: Number of nodes with available pods: 0
Jan  4 15:34:33.628: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:34.503: INFO: Number of nodes with available pods: 0
Jan  4 15:34:34.504: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:35.125: INFO: Number of nodes with available pods: 0
Jan  4 15:34:35.125: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:36.128: INFO: Number of nodes with available pods: 0
Jan  4 15:34:36.128: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:38.117: INFO: Number of nodes with available pods: 0
Jan  4 15:34:38.117: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:39.943: INFO: Number of nodes with available pods: 0
Jan  4 15:34:39.943: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:40.434: INFO: Number of nodes with available pods: 0
Jan  4 15:34:40.434: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:41.186: INFO: Number of nodes with available pods: 0
Jan  4 15:34:41.186: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:42.127: INFO: Number of nodes with available pods: 0
Jan  4 15:34:42.127: INFO: Node iruya-node is running more than one daemon pod
Jan  4 15:34:43.130: INFO: Number of nodes with available pods: 2
Jan  4 15:34:43.130: INFO: Number of running nodes: 2, number of available pods: 2
Jan  4 15:34:43.130: INFO: Update the DaemonSet to trigger a rollout
Jan  4 15:34:43.145: INFO: Updating DaemonSet daemon-set
Jan  4 15:34:58.191: INFO: Roll back the DaemonSet before rollout is complete
Jan  4 15:34:58.199: INFO: Updating DaemonSet daemon-set
Jan  4 15:34:58.199: INFO: Make sure DaemonSet rollback is complete
Jan  4 15:34:58.207: INFO: Wrong image for pod: daemon-set-xjrvs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 15:34:58.207: INFO: Pod daemon-set-xjrvs is not available
Jan  4 15:34:59.375: INFO: Wrong image for pod: daemon-set-xjrvs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 15:34:59.375: INFO: Pod daemon-set-xjrvs is not available
Jan  4 15:35:00.487: INFO: Wrong image for pod: daemon-set-xjrvs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 15:35:00.487: INFO: Pod daemon-set-xjrvs is not available
Jan  4 15:35:01.221: INFO: Wrong image for pod: daemon-set-xjrvs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 15:35:01.221: INFO: Pod daemon-set-xjrvs is not available
Jan  4 15:35:02.233: INFO: Wrong image for pod: daemon-set-xjrvs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 15:35:02.233: INFO: Pod daemon-set-xjrvs is not available
Jan  4 15:35:03.259: INFO: Wrong image for pod: daemon-set-xjrvs. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  4 15:35:03.259: INFO: Pod daemon-set-xjrvs is not available
Jan  4 15:35:04.499: INFO: Pod daemon-set-pl9mx is not available
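The repeated "Wrong image" lines above come from a per-pod check that compares each daemon pod's image against the rollback target until the rollback lands; the comparison is essentially (values taken from the log):

```shell
# Per-pod image check behind the "Wrong image for pod" log lines.
expected="docker.io/library/nginx:1.14-alpine"   # image the rollback restores
got="foo:non-existent"                           # image from the aborted update
if [ "$got" != "$expected" ]; then
  echo "Wrong image for pod: expected $expected, got $got"
fi
```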
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8858, will wait for the garbage collector to delete the pods
Jan  4 15:35:04.585: INFO: Deleting DaemonSet.extensions daemon-set took: 8.382197ms
Jan  4 15:35:06.085: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.500435318s
Jan  4 15:35:12.801: INFO: Number of nodes with available pods: 0
Jan  4 15:35:12.801: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 15:35:12.807: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8858/daemonsets","resourceVersion":"19288066"},"items":null}

Jan  4 15:35:12.809: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8858/pods","resourceVersion":"19288066"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:35:12.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8858" for this suite.
Jan  4 15:35:18.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:35:19.041: INFO: namespace daemonsets-8858 deletion completed in 6.217842969s

• [SLOW TEST:49.232 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:35:19.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-4989
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4989
STEP: Deleting pre-stop pod
Jan  4 15:35:42.415: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:35:42.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4989" for this suite.
Jan  4 15:36:20.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:36:20.560: INFO: namespace prestop-4989 deletion completed in 38.104764748s

• [SLOW TEST:61.519 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:36:20.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6839.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6839.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
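The doubled `$$` in the probe commands above is escaping that resolves to a single `$` by the time the probe runs. De-escaped, the pod A-record name is derived from the pod IP by swapping dots for dashes; a minimal local sketch with a sample IP (hypothetical):

```shell
# Build the pod A-record name the probes query, from a sample pod IP.
ip="10.44.0.5"    # hypothetical pod IP; the real probe uses `hostname -i`
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6839.pod.cluster.local"}')
echo "$podARec"   # -> 10-44-0-5.dns-6839.pod.cluster.local
# Inside the test pod, each lookup is then e.g.:
#   dig +notcp +noall +answer +search "$podARec" A
```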

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 15:36:32.752: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-6839/dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9: the server could not find the requested resource (get pods dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9)
Jan  4 15:36:32.766: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-6839/dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9: the server could not find the requested resource (get pods dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9)
Jan  4 15:36:32.774: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6839/dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9: the server could not find the requested resource (get pods dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9)
Jan  4 15:36:32.782: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6839/dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9: the server could not find the requested resource (get pods dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9)
Jan  4 15:36:32.814: INFO: Lookups using dns-6839/dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord]

Jan  4 15:36:37.904: INFO: DNS probes using dns-6839/dns-test-0eca1045-8e76-4d3a-93a1-7e94635654c9 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:36:37.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6839" for this suite.
Jan  4 15:36:44.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:36:44.157: INFO: namespace dns-6839 deletion completed in 6.155954053s

• [SLOW TEST:23.597 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:36:44.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8460
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8460
STEP: Creating statefulset with conflicting port in namespace statefulset-8460
STEP: Waiting until pod test-pod starts running in namespace statefulset-8460
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-8460
Jan  4 15:36:56.331: INFO: Observed stateful pod in namespace: statefulset-8460, name: ss-0, uid: 7da679c5-cbac-470c-9596-b54047b224af, status phase: Pending. Waiting for statefulset controller to delete.
Jan  4 15:36:56.491: INFO: Observed stateful pod in namespace: statefulset-8460, name: ss-0, uid: 7da679c5-cbac-470c-9596-b54047b224af, status phase: Failed. Waiting for statefulset controller to delete.
Jan  4 15:36:56.521: INFO: Observed stateful pod in namespace: statefulset-8460, name: ss-0, uid: 7da679c5-cbac-470c-9596-b54047b224af, status phase: Failed. Waiting for statefulset controller to delete.
Jan  4 15:36:56.529: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8460
STEP: Removing pod with conflicting port in namespace statefulset-8460
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8460 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  4 15:37:07.106: INFO: Deleting all statefulset in ns statefulset-8460
Jan  4 15:37:07.109: INFO: Scaling statefulset ss to 0
Jan  4 15:37:17.161: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 15:37:17.166: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:37:17.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8460" for this suite.
Jan  4 15:37:23.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:37:23.337: INFO: namespace statefulset-8460 deletion completed in 6.139478593s

• [SLOW TEST:39.179 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:37:23.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan  4 15:37:23.564: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2990" to be "success or failure"
Jan  4 15:37:23.574: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.421066ms
Jan  4 15:37:25.582: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0177707s
Jan  4 15:37:27.755: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190620479s
Jan  4 15:37:29.760: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19615251s
Jan  4 15:37:31.768: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203526646s
Jan  4 15:37:33.778: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.214045859s
Jan  4 15:37:35.791: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.226660794s
STEP: Saw pod success
Jan  4 15:37:35.791: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  4 15:37:35.798: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  4 15:37:35.838: INFO: Waiting for pod pod-host-path-test to disappear
Jan  4 15:37:35.843: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:37:35.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2990" for this suite.
Jan  4 15:37:41.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:37:42.033: INFO: namespace hostpath-2990 deletion completed in 6.183794255s

• [SLOW TEST:18.696 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:37:42.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-413
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-413 to expose endpoints map[]
Jan  4 15:37:42.308: INFO: Get endpoints failed (26.771141ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  4 15:37:43.344: INFO: successfully validated that service endpoint-test2 in namespace services-413 exposes endpoints map[] (1.062864438s elapsed)
STEP: Creating pod pod1 in namespace services-413
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-413 to expose endpoints map[pod1:[80]]
Jan  4 15:37:47.495: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.140639917s elapsed, will retry)
Jan  4 15:37:52.565: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.210631704s elapsed, will retry)
Jan  4 15:37:55.685: INFO: successfully validated that service endpoint-test2 in namespace services-413 exposes endpoints map[pod1:[80]] (12.330340494s elapsed)
STEP: Creating pod pod2 in namespace services-413
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-413 to expose endpoints map[pod1:[80] pod2:[80]]
Jan  4 15:38:01.007: INFO: Unexpected endpoints: found map[6f43355f-784b-4012-9f62-67f189ebe07a:[80]], expected map[pod1:[80] pod2:[80]] (5.299620713s elapsed, will retry)
Jan  4 15:38:04.569: INFO: successfully validated that service endpoint-test2 in namespace services-413 exposes endpoints map[pod1:[80] pod2:[80]] (8.861431848s elapsed)
STEP: Deleting pod pod1 in namespace services-413
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-413 to expose endpoints map[pod2:[80]]
Jan  4 15:38:04.605: INFO: successfully validated that service endpoint-test2 in namespace services-413 exposes endpoints map[pod2:[80]] (31.404876ms elapsed)
STEP: Deleting pod pod2 in namespace services-413
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-413 to expose endpoints map[]
Jan  4 15:38:04.701: INFO: successfully validated that service endpoint-test2 in namespace services-413 exposes endpoints map[] (82.328457ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:38:04.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-413" for this suite.
Jan  4 15:38:28.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:38:28.963: INFO: namespace services-413 deletion completed in 24.194580982s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:46.929 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:38:28.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 15:38:29.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4970'
Jan  4 15:38:32.250: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 15:38:32.251: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan  4 15:38:32.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4970'
Jan  4 15:38:32.484: INFO: stderr: ""
Jan  4 15:38:32.484: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:38:32.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4970" for this suite.
Jan  4 15:38:54.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:38:54.618: INFO: namespace kubectl-4970 deletion completed in 22.105884066s

• [SLOW TEST:25.654 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:38:54.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  4 15:42:02.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:42:02.163: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  4 15:42:04.164: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:42:04.173: INFO: Pod pod-with-poststart-exec-hook still exists
[... 48 further identical "Waiting for pod pod-with-poststart-exec-hook to disappear" / "Pod pod-with-poststart-exec-hook still exists" check pairs, one every 2s from 15:42:06 through 15:43:40, elided ...]
Jan  4 15:43:42.164: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:43:42.172: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  4 15:43:44.164: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  4 15:43:44.181: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:43:44.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3000" for this suite.
Jan  4 15:44:06.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:44:06.360: INFO: namespace container-lifecycle-hook-3000 deletion completed in 22.174598131s

• [SLOW TEST:311.742 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:44:06.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  4 15:44:06.691: INFO: Waiting up to 5m0s for pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6" in namespace "downward-api-4183" to be "success or failure"
Jan  4 15:44:06.911: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Pending", Reason="", readiness=false. Elapsed: 219.932831ms
Jan  4 15:44:11.958: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.266635892s
Jan  4 15:44:13.967: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.275797797s
Jan  4 15:44:15.979: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.287626547s
Jan  4 15:44:18.002: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.311345262s
Jan  4 15:44:20.013: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.321777718s
Jan  4 15:44:22.024: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.333070975s
Jan  4 15:44:24.099: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.407489183s
Jan  4 15:44:26.109: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.417382637s
Jan  4 15:44:28.116: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.424964311s
STEP: Saw pod success
Jan  4 15:44:28.116: INFO: Pod "downward-api-d540645c-6acc-4f61-bb98-63e9974523b6" satisfied condition "success or failure"
Jan  4 15:44:28.123: INFO: Trying to get logs from node iruya-node pod downward-api-d540645c-6acc-4f61-bb98-63e9974523b6 container dapi-container: 
STEP: delete the pod
Jan  4 15:44:28.195: INFO: Waiting for pod downward-api-d540645c-6acc-4f61-bb98-63e9974523b6 to disappear
Jan  4 15:44:28.202: INFO: Pod downward-api-d540645c-6acc-4f61-bb98-63e9974523b6 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:44:28.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4183" for this suite.
Jan  4 15:44:34.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:44:34.427: INFO: namespace downward-api-4183 deletion completed in 6.21796285s

• [SLOW TEST:28.067 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:44:34.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  4 15:44:34.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-661'
Jan  4 15:44:34.838: INFO: stderr: ""
Jan  4 15:44:34.838: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 15:44:34.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-661'
Jan  4 15:44:35.220: INFO: stderr: ""
Jan  4 15:44:35.221: INFO: stdout: "update-demo-nautilus-4jvds update-demo-nautilus-gzfc9 "
Jan  4 15:44:35.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4jvds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-661'
Jan  4 15:44:37.271: INFO: stderr: ""
Jan  4 15:44:37.271: INFO: stdout: ""
Jan  4 15:44:37.271: INFO: update-demo-nautilus-4jvds is created but not running
Jan  4 15:44:42.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-661'
Jan  4 15:44:43.187: INFO: stderr: ""
Jan  4 15:44:43.187: INFO: stdout: "update-demo-nautilus-4jvds update-demo-nautilus-gzfc9 "
Jan  4 15:44:43.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4jvds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-661'
Jan  4 15:44:43.398: INFO: stderr: ""
Jan  4 15:44:43.398: INFO: stdout: ""
Jan  4 15:44:43.399: INFO: update-demo-nautilus-4jvds is created but not running
Jan  4 15:44:48.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-661'
Jan  4 15:44:50.023: INFO: stderr: ""
Jan  4 15:44:50.023: INFO: stdout: "update-demo-nautilus-4jvds update-demo-nautilus-gzfc9 "
Jan  4 15:44:50.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4jvds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-661'
Jan  4 15:44:50.490: INFO: stderr: ""
Jan  4 15:44:50.491: INFO: stdout: ""
Jan  4 15:44:50.491: INFO: update-demo-nautilus-4jvds is created but not running
Jan  4 15:44:55.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-661'
Jan  4 15:44:55.727: INFO: stderr: ""
Jan  4 15:44:55.727: INFO: stdout: "update-demo-nautilus-4jvds update-demo-nautilus-gzfc9 "
Jan  4 15:44:55.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4jvds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-661'
Jan  4 15:44:55.857: INFO: stderr: ""
Jan  4 15:44:55.857: INFO: stdout: "true"
Jan  4 15:44:55.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4jvds -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-661'
Jan  4 15:44:56.001: INFO: stderr: ""
Jan  4 15:44:56.001: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 15:44:56.001: INFO: validating pod update-demo-nautilus-4jvds
Jan  4 15:44:56.012: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 15:44:56.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 15:44:56.012: INFO: update-demo-nautilus-4jvds is verified up and running
Jan  4 15:44:56.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gzfc9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-661'
Jan  4 15:44:56.076: INFO: stderr: ""
Jan  4 15:44:56.076: INFO: stdout: "true"
Jan  4 15:44:56.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gzfc9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-661'
Jan  4 15:44:56.187: INFO: stderr: ""
Jan  4 15:44:56.187: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 15:44:56.187: INFO: validating pod update-demo-nautilus-gzfc9
Jan  4 15:44:56.338: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 15:44:56.338: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 15:44:56.338: INFO: update-demo-nautilus-gzfc9 is verified up and running
STEP: using delete to clean up resources
Jan  4 15:44:56.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-661'
Jan  4 15:44:56.417: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 15:44:56.417: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  4 15:44:56.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-661'
Jan  4 15:44:56.521: INFO: stderr: "No resources found.\n"
Jan  4 15:44:56.521: INFO: stdout: ""
Jan  4 15:44:56.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-661 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 15:44:56.634: INFO: stderr: ""
Jan  4 15:44:56.635: INFO: stdout: "update-demo-nautilus-4jvds\nupdate-demo-nautilus-gzfc9\n"
Jan  4 15:44:57.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-661'
Jan  4 15:44:58.445: INFO: stderr: "No resources found.\n"
Jan  4 15:44:58.445: INFO: stdout: ""
Jan  4 15:44:58.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-661 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 15:44:58.837: INFO: stderr: ""
Jan  4 15:44:58.838: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:44:58.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-661" for this suite.
Jan  4 15:45:07.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:45:07.107: INFO: namespace kubectl-661 deletion completed in 8.220625323s

• [SLOW TEST:32.679 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
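The Update Demo test above drives `kubectl get pods -o template` with Go templates such as `{{range .items}}{{.metadata.name}} {{end}}` to extract pod names (the later templates also use `exists`, which is a kubectl template extension, not part of plain `text/template`). The name-listing template can be reproduced with the standard library over a mock pod list shaped like the API response:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderPodNames executes the same name-extraction template the log passes
// to kubectl via --template, against a mock structure shaped like a
// PodList API response (maps stand in for the real typed objects).
func renderPodNames(names []string) (string, error) {
	items := make([]map[string]interface{}, 0, len(names))
	for _, n := range names {
		items = append(items, map[string]interface{}{
			"metadata": map[string]interface{}{"name": n},
		})
	}
	data := map[string]interface{}{"items": items}
	tmpl, err := template.New("names").Parse(`{{range .items}}{{.metadata.name}} {{end}}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderPodNames([]string{"update-demo-nautilus-4jvds", "update-demo-nautilus-gzfc9"})
	if err != nil {
		panic(err)
	}
	// Matches the stdout lines in the log, trailing space included.
	fmt.Printf("stdout: %q\n", out)
}
```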
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:45:07.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:45:21.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8554" for this suite.
Jan  4 15:46:13.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:46:13.489: INFO: namespace kubelet-test-8554 deletion completed in 52.118914284s

• [SLOW TEST:66.382 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:46:13.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 15:46:13.601: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4" in namespace "projected-3078" to be "success or failure"
Jan  4 15:46:13.606: INFO: Pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.913807ms
Jan  4 15:46:15.616: INFO: Pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014461164s
Jan  4 15:46:17.628: INFO: Pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026393451s
Jan  4 15:46:19.635: INFO: Pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033638024s
Jan  4 15:46:21.642: INFO: Pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040617818s
Jan  4 15:46:23.661: INFO: Pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059889028s
Jan  4 15:46:25.667: INFO: Pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.065380907s
Jan  4 15:46:27.675: INFO: Pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.073094327s
STEP: Saw pod success
Jan  4 15:46:27.675: INFO: Pod "downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4" satisfied condition "success or failure"
Jan  4 15:46:27.680: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4 container client-container: 
STEP: delete the pod
Jan  4 15:46:27.735: INFO: Waiting for pod downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4 to disappear
Jan  4 15:46:27.739: INFO: Pod downwardapi-volume-53d3780e-1583-46d4-8ecf-cf2b2692e5a4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:46:27.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3078" for this suite.
Jan  4 15:46:33.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:46:33.898: INFO: namespace projected-3078 deletion completed in 6.153305543s

• [SLOW TEST:20.408 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:46:33.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6584/configmap-test-807a301c-cba3-4c82-bc4d-d635405c20f3
STEP: Creating a pod to test consume configMaps
Jan  4 15:46:35.329: INFO: Waiting up to 5m0s for pod "pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1" in namespace "configmap-6584" to be "success or failure"
Jan  4 15:46:35.351: INFO: Pod "pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.934447ms
Jan  4 15:46:37.367: INFO: Pod "pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037870006s
Jan  4 15:46:39.390: INFO: Pod "pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060320701s
Jan  4 15:46:41.423: INFO: Pod "pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09399378s
Jan  4 15:46:44.513: INFO: Pod "pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.183376775s
Jan  4 15:46:46.523: INFO: Pod "pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.193977401s
Jan  4 15:46:48.534: INFO: Pod "pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.205006601s
STEP: Saw pod success
Jan  4 15:46:48.535: INFO: Pod "pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1" satisfied condition "success or failure"
Jan  4 15:46:48.539: INFO: Trying to get logs from node iruya-node pod pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1 container env-test: 
STEP: delete the pod
Jan  4 15:46:48.724: INFO: Waiting for pod pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1 to disappear
Jan  4 15:46:48.734: INFO: Pod pod-configmaps-cade9c05-3a13-4bf9-8546-5e03003f2bb1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:46:48.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6584" for this suite.
Jan  4 15:46:54.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:46:54.924: INFO: namespace configmap-6584 deletion completed in 6.182245748s

• [SLOW TEST:21.026 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:46:54.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  4 15:47:05.709: INFO: Successfully updated pod "pod-update-activedeadlineseconds-01093f2a-18eb-4ef9-9224-a4a293f6bbf7"
Jan  4 15:47:05.709: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-01093f2a-18eb-4ef9-9224-a4a293f6bbf7" in namespace "pods-1915" to be "terminated due to deadline exceeded"
Jan  4 15:47:05.733: INFO: Pod "pod-update-activedeadlineseconds-01093f2a-18eb-4ef9-9224-a4a293f6bbf7": Phase="Running", Reason="", readiness=true. Elapsed: 23.283771ms
Jan  4 15:47:07.742: INFO: Pod "pod-update-activedeadlineseconds-01093f2a-18eb-4ef9-9224-a4a293f6bbf7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.032179475s
Jan  4 15:47:07.742: INFO: Pod "pod-update-activedeadlineseconds-01093f2a-18eb-4ef9-9224-a4a293f6bbf7" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:47:07.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1915" for this suite.
Jan  4 15:47:13.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:47:14.086: INFO: namespace pods-1915 deletion completed in 6.335616614s

• [SLOW TEST:19.162 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:47:14.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  4 15:47:24.280: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2efc0a5d-71a3-46e4-94e0-f336c33b0aef,GenerateName:,Namespace:events-77,SelfLink:/api/v1/namespaces/events-77/pods/send-events-2efc0a5d-71a3-46e4-94e0-f336c33b0aef,UID:ec171fd0-4c2f-4084-b09e-145e242115d6,ResourceVersion:19289605,Generation:0,CreationTimestamp:2020-01-04 15:47:14 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 208693217,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ncmft {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ncmft,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-ncmft true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b0f930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b0f950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:47:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:47:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:47:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:47:14 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-04 15:47:14 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-04 15:47:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://ea70df8625a44bb76c5dc2527c343aca650e64a2e853b58bea5f3dabfcadcfc1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  4 15:47:26.295: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  4 15:47:28.302: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:47:28.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-77" for this suite.
Jan  4 15:48:06.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:48:06.456: INFO: namespace events-77 deletion completed in 38.123601195s

• [SLOW TEST:52.369 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:48:06.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  4 15:48:15.392: INFO: Successfully updated pod "pod-update-5c496311-5a68-4e99-808f-4c1a2a90300a"
STEP: verifying the updated pod is in kubernetes
Jan  4 15:48:15.398: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:48:15.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7889" for this suite.
Jan  4 15:48:37.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:48:37.520: INFO: namespace pods-7889 deletion completed in 22.116619634s

• [SLOW TEST:31.063 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:48:37.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 15:48:37.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  4 15:48:37.786: INFO: stderr: ""
Jan  4 15:48:37.786: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:48:37.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3416" for this suite.
Jan  4 15:48:43.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:48:43.995: INFO: namespace kubectl-3416 deletion completed in 6.203462315s

• [SLOW TEST:6.475 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:48:43.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2540
I0104 15:48:44.090125       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2540, replica count: 1
I0104 15:48:45.140903       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:48:46.141364       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:48:47.141874       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:48:48.142258       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:48:49.142486       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:48:50.142762       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:48:51.143095       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:48:52.144169       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0104 15:48:53.145142       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  4 15:48:53.291: INFO: Created: latency-svc-78rpj
Jan  4 15:48:53.307: INFO: Got endpoints: latency-svc-78rpj [61.289706ms]
Jan  4 15:48:53.415: INFO: Created: latency-svc-sbd9p
Jan  4 15:48:53.464: INFO: Got endpoints: latency-svc-sbd9p [156.024706ms]
Jan  4 15:48:53.472: INFO: Created: latency-svc-nm9k2
Jan  4 15:48:53.483: INFO: Got endpoints: latency-svc-nm9k2 [173.287766ms]
Jan  4 15:48:53.603: INFO: Created: latency-svc-z4pml
Jan  4 15:48:53.610: INFO: Got endpoints: latency-svc-z4pml [300.806979ms]
Jan  4 15:48:53.679: INFO: Created: latency-svc-dxnbx
Jan  4 15:48:53.684: INFO: Got endpoints: latency-svc-dxnbx [375.092012ms]
Jan  4 15:48:53.756: INFO: Created: latency-svc-llg67
Jan  4 15:48:53.771: INFO: Got endpoints: latency-svc-llg67 [461.598902ms]
Jan  4 15:48:53.818: INFO: Created: latency-svc-h8l6h
Jan  4 15:48:53.874: INFO: Got endpoints: latency-svc-h8l6h [564.321817ms]
Jan  4 15:48:53.960: INFO: Created: latency-svc-jl48t
Jan  4 15:48:54.030: INFO: Got endpoints: latency-svc-jl48t [720.405534ms]
Jan  4 15:48:54.057: INFO: Created: latency-svc-w2rpf
Jan  4 15:48:54.067: INFO: Got endpoints: latency-svc-w2rpf [757.078876ms]
Jan  4 15:48:54.207: INFO: Created: latency-svc-mgrmx
Jan  4 15:48:54.253: INFO: Got endpoints: latency-svc-mgrmx [943.529519ms]
Jan  4 15:48:54.267: INFO: Created: latency-svc-25j7g
Jan  4 15:48:54.278: INFO: Got endpoints: latency-svc-25j7g [968.753509ms]
Jan  4 15:48:54.351: INFO: Created: latency-svc-xdxqm
Jan  4 15:48:54.393: INFO: Created: latency-svc-trdwv
Jan  4 15:48:54.393: INFO: Got endpoints: latency-svc-xdxqm [1.083627847s]
Jan  4 15:48:54.423: INFO: Got endpoints: latency-svc-trdwv [1.11364905s]
Jan  4 15:48:54.567: INFO: Created: latency-svc-7x9jn
Jan  4 15:48:54.639: INFO: Got endpoints: latency-svc-7x9jn [1.329259157s]
Jan  4 15:48:54.648: INFO: Created: latency-svc-x428n
Jan  4 15:48:54.652: INFO: Got endpoints: latency-svc-x428n [1.342878083s]
Jan  4 15:48:54.725: INFO: Created: latency-svc-sscbd
Jan  4 15:48:54.727: INFO: Got endpoints: latency-svc-sscbd [1.418150968s]
Jan  4 15:48:54.764: INFO: Created: latency-svc-8ztdx
Jan  4 15:48:54.767: INFO: Got endpoints: latency-svc-8ztdx [1.302738864s]
Jan  4 15:48:54.789: INFO: Created: latency-svc-9cs5d
Jan  4 15:48:54.866: INFO: Got endpoints: latency-svc-9cs5d [1.382400049s]
Jan  4 15:48:54.882: INFO: Created: latency-svc-shs6d
Jan  4 15:48:54.920: INFO: Got endpoints: latency-svc-shs6d [1.309287875s]
Jan  4 15:48:54.930: INFO: Created: latency-svc-vlq7n
Jan  4 15:48:54.959: INFO: Got endpoints: latency-svc-vlq7n [1.27490307s]
Jan  4 15:48:55.047: INFO: Created: latency-svc-lgpmc
Jan  4 15:48:55.055: INFO: Got endpoints: latency-svc-lgpmc [1.283630889s]
Jan  4 15:48:55.113: INFO: Created: latency-svc-v765j
Jan  4 15:48:55.178: INFO: Got endpoints: latency-svc-v765j [1.303170299s]
Jan  4 15:48:55.216: INFO: Created: latency-svc-jbnfh
Jan  4 15:48:55.231: INFO: Got endpoints: latency-svc-jbnfh [1.200173028s]
Jan  4 15:48:55.332: INFO: Created: latency-svc-xvlfk
Jan  4 15:48:55.340: INFO: Got endpoints: latency-svc-xvlfk [1.273254893s]
Jan  4 15:48:55.405: INFO: Created: latency-svc-wqx8g
Jan  4 15:48:55.416: INFO: Got endpoints: latency-svc-wqx8g [185.435515ms]
Jan  4 15:48:56.190: INFO: Created: latency-svc-28spm
Jan  4 15:48:56.191: INFO: Got endpoints: latency-svc-28spm [1.937839227s]
Jan  4 15:48:56.357: INFO: Created: latency-svc-hm8lz
Jan  4 15:48:56.369: INFO: Got endpoints: latency-svc-hm8lz [2.091182269s]
Jan  4 15:48:56.408: INFO: Created: latency-svc-pdw9h
Jan  4 15:48:56.418: INFO: Got endpoints: latency-svc-pdw9h [2.024218807s]
Jan  4 15:48:56.678: INFO: Created: latency-svc-s8kd8
Jan  4 15:48:56.701: INFO: Got endpoints: latency-svc-s8kd8 [2.277669274s]
Jan  4 15:48:56.826: INFO: Created: latency-svc-6cnlg
Jan  4 15:48:56.835: INFO: Got endpoints: latency-svc-6cnlg [2.196262669s]
Jan  4 15:48:56.886: INFO: Created: latency-svc-qjhcf
Jan  4 15:48:56.891: INFO: Got endpoints: latency-svc-qjhcf [2.238599674s]
Jan  4 15:48:56.997: INFO: Created: latency-svc-7mqng
Jan  4 15:48:56.999: INFO: Got endpoints: latency-svc-7mqng [2.27199655s]
Jan  4 15:48:57.059: INFO: Created: latency-svc-r96mp
Jan  4 15:48:57.068: INFO: Got endpoints: latency-svc-r96mp [2.300376215s]
Jan  4 15:48:57.163: INFO: Created: latency-svc-762nk
Jan  4 15:48:57.172: INFO: Got endpoints: latency-svc-762nk [2.306056077s]
Jan  4 15:48:57.234: INFO: Created: latency-svc-wg2v9
Jan  4 15:48:57.248: INFO: Got endpoints: latency-svc-wg2v9 [2.327023411s]
Jan  4 15:48:57.359: INFO: Created: latency-svc-85hwh
Jan  4 15:48:57.372: INFO: Got endpoints: latency-svc-85hwh [2.412624796s]
Jan  4 15:48:57.438: INFO: Created: latency-svc-sp4lf
Jan  4 15:48:57.439: INFO: Got endpoints: latency-svc-sp4lf [2.384095328s]
Jan  4 15:48:57.542: INFO: Created: latency-svc-v5npf
Jan  4 15:48:57.559: INFO: Got endpoints: latency-svc-v5npf [2.38074571s]
Jan  4 15:48:57.598: INFO: Created: latency-svc-8frvc
Jan  4 15:48:57.650: INFO: Got endpoints: latency-svc-8frvc [2.309293177s]
Jan  4 15:48:57.680: INFO: Created: latency-svc-9d2wx
Jan  4 15:48:57.688: INFO: Got endpoints: latency-svc-9d2wx [2.271370435s]
Jan  4 15:48:57.727: INFO: Created: latency-svc-r6t8w
Jan  4 15:48:57.733: INFO: Got endpoints: latency-svc-r6t8w [1.541389759s]
Jan  4 15:48:57.843: INFO: Created: latency-svc-99zg5
Jan  4 15:48:57.854: INFO: Got endpoints: latency-svc-99zg5 [1.484668617s]
Jan  4 15:48:57.925: INFO: Created: latency-svc-bsr5b
Jan  4 15:48:57.997: INFO: Got endpoints: latency-svc-bsr5b [1.578942656s]
Jan  4 15:48:58.024: INFO: Created: latency-svc-rx89r
Jan  4 15:48:58.042: INFO: Got endpoints: latency-svc-rx89r [1.341039288s]
Jan  4 15:48:58.088: INFO: Created: latency-svc-tqdx2
Jan  4 15:48:58.097: INFO: Got endpoints: latency-svc-tqdx2 [1.261404758s]
Jan  4 15:48:58.305: INFO: Created: latency-svc-5zkk6
Jan  4 15:48:58.444: INFO: Got endpoints: latency-svc-5zkk6 [1.552745024s]
Jan  4 15:48:58.446: INFO: Created: latency-svc-6gv9b
Jan  4 15:48:58.466: INFO: Got endpoints: latency-svc-6gv9b [1.466784172s]
Jan  4 15:48:58.618: INFO: Created: latency-svc-bsw9v
Jan  4 15:48:58.666: INFO: Got endpoints: latency-svc-bsw9v [1.597914713s]
Jan  4 15:48:58.803: INFO: Created: latency-svc-nlzdr
Jan  4 15:48:58.820: INFO: Got endpoints: latency-svc-nlzdr [1.647449329s]
Jan  4 15:48:58.886: INFO: Created: latency-svc-l56td
Jan  4 15:48:58.887: INFO: Got endpoints: latency-svc-l56td [1.638646913s]
Jan  4 15:48:58.977: INFO: Created: latency-svc-r5nkg
Jan  4 15:48:58.994: INFO: Got endpoints: latency-svc-r5nkg [1.621754773s]
Jan  4 15:48:59.041: INFO: Created: latency-svc-45p5n
Jan  4 15:48:59.044: INFO: Got endpoints: latency-svc-45p5n [1.604061015s]
Jan  4 15:48:59.142: INFO: Created: latency-svc-7dvdr
Jan  4 15:48:59.150: INFO: Got endpoints: latency-svc-7dvdr [1.590883062s]
Jan  4 15:48:59.215: INFO: Created: latency-svc-mvz2r
Jan  4 15:48:59.295: INFO: Got endpoints: latency-svc-mvz2r [1.645394383s]
Jan  4 15:48:59.298: INFO: Created: latency-svc-b5drq
Jan  4 15:48:59.308: INFO: Got endpoints: latency-svc-b5drq [1.61991059s]
Jan  4 15:48:59.361: INFO: Created: latency-svc-7m7h7
Jan  4 15:48:59.428: INFO: Got endpoints: latency-svc-7m7h7 [1.695484857s]
Jan  4 15:48:59.459: INFO: Created: latency-svc-97cx4
Jan  4 15:48:59.462: INFO: Got endpoints: latency-svc-97cx4 [1.606893101s]
Jan  4 15:48:59.503: INFO: Created: latency-svc-8b9h9
Jan  4 15:48:59.511: INFO: Got endpoints: latency-svc-8b9h9 [1.513495749s]
Jan  4 15:48:59.602: INFO: Created: latency-svc-jr7st
Jan  4 15:48:59.614: INFO: Got endpoints: latency-svc-jr7st [1.572002672s]
Jan  4 15:48:59.681: INFO: Created: latency-svc-7mm22
Jan  4 15:48:59.693: INFO: Got endpoints: latency-svc-7mm22 [1.595797057s]
Jan  4 15:48:59.764: INFO: Created: latency-svc-phsxc
Jan  4 15:48:59.779: INFO: Got endpoints: latency-svc-phsxc [1.334540389s]
Jan  4 15:48:59.823: INFO: Created: latency-svc-t2fh8
Jan  4 15:48:59.824: INFO: Got endpoints: latency-svc-t2fh8 [1.356402673s]
Jan  4 15:48:59.929: INFO: Created: latency-svc-jk94q
Jan  4 15:48:59.935: INFO: Got endpoints: latency-svc-jk94q [1.268583291s]
Jan  4 15:48:59.999: INFO: Created: latency-svc-gvgf6
Jan  4 15:49:00.003: INFO: Got endpoints: latency-svc-gvgf6 [1.18272446s]
Jan  4 15:49:00.106: INFO: Created: latency-svc-vllpk
Jan  4 15:49:00.106: INFO: Got endpoints: latency-svc-vllpk [1.219552462s]
Jan  4 15:49:00.138: INFO: Created: latency-svc-64q9v
Jan  4 15:49:00.146: INFO: Got endpoints: latency-svc-64q9v [1.151111689s]
Jan  4 15:49:00.308: INFO: Created: latency-svc-hqfqs
Jan  4 15:49:00.324: INFO: Got endpoints: latency-svc-hqfqs [1.279630538s]
Jan  4 15:49:00.479: INFO: Created: latency-svc-jnp5c
Jan  4 15:49:00.517: INFO: Got endpoints: latency-svc-jnp5c [1.366806031s]
Jan  4 15:49:00.666: INFO: Created: latency-svc-cxgmf
Jan  4 15:49:00.733: INFO: Got endpoints: latency-svc-cxgmf [1.437927828s]
Jan  4 15:49:00.744: INFO: Created: latency-svc-bhsqb
Jan  4 15:49:00.830: INFO: Got endpoints: latency-svc-bhsqb [1.521613255s]
Jan  4 15:49:00.899: INFO: Created: latency-svc-7nxgh
Jan  4 15:49:00.909: INFO: Got endpoints: latency-svc-7nxgh [1.480456403s]
Jan  4 15:49:01.049: INFO: Created: latency-svc-cs956
Jan  4 15:49:01.053: INFO: Got endpoints: latency-svc-cs956 [1.591549834s]
Jan  4 15:49:01.123: INFO: Created: latency-svc-c7jd8
Jan  4 15:49:01.123: INFO: Got endpoints: latency-svc-c7jd8 [1.612537934s]
Jan  4 15:49:01.204: INFO: Created: latency-svc-vrj49
Jan  4 15:49:01.214: INFO: Got endpoints: latency-svc-vrj49 [1.599147009s]
Jan  4 15:49:01.255: INFO: Created: latency-svc-rwgnh
Jan  4 15:49:01.265: INFO: Got endpoints: latency-svc-rwgnh [1.571089594s]
Jan  4 15:49:01.372: INFO: Created: latency-svc-v2l6b
Jan  4 15:49:01.416: INFO: Created: latency-svc-8gsnd
Jan  4 15:49:01.418: INFO: Got endpoints: latency-svc-v2l6b [1.638283247s]
Jan  4 15:49:01.445: INFO: Got endpoints: latency-svc-8gsnd [1.620858853s]
Jan  4 15:49:01.596: INFO: Created: latency-svc-jwtz5
Jan  4 15:49:01.603: INFO: Got endpoints: latency-svc-jwtz5 [1.668293365s]
Jan  4 15:49:01.647: INFO: Created: latency-svc-zhwxn
Jan  4 15:49:01.652: INFO: Got endpoints: latency-svc-zhwxn [1.648443404s]
Jan  4 15:49:01.791: INFO: Created: latency-svc-57cnc
Jan  4 15:49:01.808: INFO: Got endpoints: latency-svc-57cnc [1.701187927s]
Jan  4 15:49:01.857: INFO: Created: latency-svc-zw2x6
Jan  4 15:49:01.878: INFO: Got endpoints: latency-svc-zw2x6 [1.732795533s]
Jan  4 15:49:01.968: INFO: Created: latency-svc-hbskl
Jan  4 15:49:01.978: INFO: Got endpoints: latency-svc-hbskl [1.653852672s]
Jan  4 15:49:02.028: INFO: Created: latency-svc-vg429
Jan  4 15:49:02.141: INFO: Got endpoints: latency-svc-vg429 [1.624191822s]
Jan  4 15:49:02.143: INFO: Created: latency-svc-l6clw
Jan  4 15:49:02.180: INFO: Got endpoints: latency-svc-l6clw [1.44575382s]
Jan  4 15:49:02.241: INFO: Created: latency-svc-2r5cd
Jan  4 15:49:02.320: INFO: Got endpoints: latency-svc-2r5cd [1.489211818s]
Jan  4 15:49:02.324: INFO: Created: latency-svc-bhm98
Jan  4 15:49:02.333: INFO: Got endpoints: latency-svc-bhm98 [1.423955033s]
Jan  4 15:49:02.377: INFO: Created: latency-svc-n4bfx
Jan  4 15:49:02.471: INFO: Got endpoints: latency-svc-n4bfx [1.417093746s]
Jan  4 15:49:02.472: INFO: Created: latency-svc-wztn4
Jan  4 15:49:02.479: INFO: Got endpoints: latency-svc-wztn4 [1.355860545s]
Jan  4 15:49:02.546: INFO: Created: latency-svc-wbqxj
Jan  4 15:49:02.555: INFO: Got endpoints: latency-svc-wbqxj [1.340779295s]
Jan  4 15:49:02.738: INFO: Created: latency-svc-6mkcb
Jan  4 15:49:02.810: INFO: Got endpoints: latency-svc-6mkcb [1.545564327s]
Jan  4 15:49:02.811: INFO: Created: latency-svc-hvt2p
Jan  4 15:49:02.822: INFO: Got endpoints: latency-svc-hvt2p [1.404231759s]
Jan  4 15:49:02.933: INFO: Created: latency-svc-vzfqc
Jan  4 15:49:02.966: INFO: Got endpoints: latency-svc-vzfqc [1.520785491s]
Jan  4 15:49:02.978: INFO: Created: latency-svc-whhl8
Jan  4 15:49:02.978: INFO: Got endpoints: latency-svc-whhl8 [1.374883861s]
Jan  4 15:49:03.104: INFO: Created: latency-svc-pbsp8
Jan  4 15:49:03.114: INFO: Got endpoints: latency-svc-pbsp8 [1.461631174s]
Jan  4 15:49:03.186: INFO: Created: latency-svc-4blrw
Jan  4 15:49:03.268: INFO: Got endpoints: latency-svc-4blrw [1.460389214s]
Jan  4 15:49:03.289: INFO: Created: latency-svc-29tsk
Jan  4 15:49:03.290: INFO: Got endpoints: latency-svc-29tsk [1.411088638s]
Jan  4 15:49:03.367: INFO: Created: latency-svc-s2pfz
Jan  4 15:49:03.482: INFO: Got endpoints: latency-svc-s2pfz [1.504085885s]
Jan  4 15:49:03.518: INFO: Created: latency-svc-6dcqh
Jan  4 15:49:03.532: INFO: Got endpoints: latency-svc-6dcqh [1.390041777s]
Jan  4 15:49:03.661: INFO: Created: latency-svc-zblnj
Jan  4 15:49:03.675: INFO: Got endpoints: latency-svc-zblnj [1.495478389s]
Jan  4 15:49:03.717: INFO: Created: latency-svc-x9rmn
Jan  4 15:49:03.838: INFO: Got endpoints: latency-svc-x9rmn [1.517298118s]
Jan  4 15:49:03.846: INFO: Created: latency-svc-pzhp2
Jan  4 15:49:03.869: INFO: Got endpoints: latency-svc-pzhp2 [1.53523387s]
Jan  4 15:49:03.943: INFO: Created: latency-svc-pmkr8
Jan  4 15:49:04.037: INFO: Got endpoints: latency-svc-pmkr8 [1.565540696s]
Jan  4 15:49:04.078: INFO: Created: latency-svc-lbj25
Jan  4 15:49:04.098: INFO: Got endpoints: latency-svc-lbj25 [1.618695675s]
Jan  4 15:49:04.218: INFO: Created: latency-svc-zptl2
Jan  4 15:49:04.223: INFO: Got endpoints: latency-svc-zptl2 [1.66759636s]
Jan  4 15:49:04.273: INFO: Created: latency-svc-2fp6p
Jan  4 15:49:04.285: INFO: Got endpoints: latency-svc-2fp6p [1.473831498s]
Jan  4 15:49:04.448: INFO: Created: latency-svc-7mhcm
Jan  4 15:49:04.448: INFO: Got endpoints: latency-svc-7mhcm [1.625481268s]
Jan  4 15:49:04.609: INFO: Created: latency-svc-nzzsb
Jan  4 15:49:04.609: INFO: Got endpoints: latency-svc-nzzsb [1.64303368s]
Jan  4 15:49:04.825: INFO: Created: latency-svc-hjb7d
Jan  4 15:49:04.894: INFO: Got endpoints: latency-svc-hjb7d [1.915994181s]
Jan  4 15:49:04.904: INFO: Created: latency-svc-v86mb
Jan  4 15:49:04.924: INFO: Got endpoints: latency-svc-v86mb [1.809687051s]
Jan  4 15:49:05.012: INFO: Created: latency-svc-dlsph
Jan  4 15:49:05.024: INFO: Got endpoints: latency-svc-dlsph [1.755273414s]
Jan  4 15:49:05.061: INFO: Created: latency-svc-tbgpb
Jan  4 15:49:05.204: INFO: Created: latency-svc-pc97v
Jan  4 15:49:05.204: INFO: Got endpoints: latency-svc-tbgpb [1.913766453s]
Jan  4 15:49:05.217: INFO: Got endpoints: latency-svc-pc97v [1.734723112s]
Jan  4 15:49:05.275: INFO: Created: latency-svc-9nsfb
Jan  4 15:49:05.394: INFO: Got endpoints: latency-svc-9nsfb [1.861910813s]
Jan  4 15:49:05.404: INFO: Created: latency-svc-mjhg8
Jan  4 15:49:05.407: INFO: Got endpoints: latency-svc-mjhg8 [1.730825025s]
Jan  4 15:49:05.444: INFO: Created: latency-svc-bfl4r
Jan  4 15:49:05.460: INFO: Got endpoints: latency-svc-bfl4r [1.622217869s]
Jan  4 15:49:05.615: INFO: Created: latency-svc-bcqx4
Jan  4 15:49:05.624: INFO: Got endpoints: latency-svc-bcqx4 [1.755004029s]
Jan  4 15:49:05.690: INFO: Created: latency-svc-ssvzp
Jan  4 15:49:05.690: INFO: Got endpoints: latency-svc-ssvzp [1.653034142s]
Jan  4 15:49:05.797: INFO: Created: latency-svc-cttf6
Jan  4 15:49:05.806: INFO: Got endpoints: latency-svc-cttf6 [1.707099103s]
Jan  4 15:49:05.852: INFO: Created: latency-svc-k5bgt
Jan  4 15:49:05.858: INFO: Got endpoints: latency-svc-k5bgt [1.634681711s]
Jan  4 15:49:06.091: INFO: Created: latency-svc-4z6pw
Jan  4 15:49:06.105: INFO: Got endpoints: latency-svc-4z6pw [1.819698894s]
Jan  4 15:49:06.146: INFO: Created: latency-svc-rf7jz
Jan  4 15:49:06.175: INFO: Got endpoints: latency-svc-rf7jz [1.726727862s]
Jan  4 15:49:06.318: INFO: Created: latency-svc-xdfzf
Jan  4 15:49:06.326: INFO: Got endpoints: latency-svc-xdfzf [1.716362316s]
Jan  4 15:49:06.373: INFO: Created: latency-svc-48sl9
Jan  4 15:49:06.388: INFO: Got endpoints: latency-svc-48sl9 [1.492810822s]
Jan  4 15:49:06.627: INFO: Created: latency-svc-g6mp5
Jan  4 15:49:07.782: INFO: Got endpoints: latency-svc-g6mp5 [2.857791761s]
Jan  4 15:49:07.970: INFO: Created: latency-svc-9pt5w
Jan  4 15:49:07.998: INFO: Got endpoints: latency-svc-9pt5w [2.973339035s]
Jan  4 15:49:08.061: INFO: Created: latency-svc-69znk
Jan  4 15:49:08.127: INFO: Got endpoints: latency-svc-69znk [2.922561667s]
Jan  4 15:49:08.197: INFO: Created: latency-svc-zdz7c
Jan  4 15:49:08.201: INFO: Got endpoints: latency-svc-zdz7c [2.98360046s]
Jan  4 15:49:08.392: INFO: Created: latency-svc-g9xln
Jan  4 15:49:08.450: INFO: Got endpoints: latency-svc-g9xln [3.055313352s]
Jan  4 15:49:08.452: INFO: Created: latency-svc-snsqc
Jan  4 15:49:08.480: INFO: Got endpoints: latency-svc-snsqc [3.073610494s]
Jan  4 15:49:08.589: INFO: Created: latency-svc-5xq66
Jan  4 15:49:08.628: INFO: Got endpoints: latency-svc-5xq66 [3.167693847s]
Jan  4 15:49:08.721: INFO: Created: latency-svc-k92gq
Jan  4 15:49:08.816: INFO: Got endpoints: latency-svc-k92gq [3.191260668s]
Jan  4 15:49:08.913: INFO: Created: latency-svc-qp8gq
Jan  4 15:49:08.984: INFO: Got endpoints: latency-svc-qp8gq [3.293192128s]
Jan  4 15:49:09.021: INFO: Created: latency-svc-q6znt
Jan  4 15:49:09.033: INFO: Got endpoints: latency-svc-q6znt [3.227404538s]
Jan  4 15:49:09.145: INFO: Created: latency-svc-bbsfw
Jan  4 15:49:09.150: INFO: Got endpoints: latency-svc-bbsfw [3.291819387s]
Jan  4 15:49:09.193: INFO: Created: latency-svc-5qdxw
Jan  4 15:49:09.198: INFO: Got endpoints: latency-svc-5qdxw [3.093509106s]
Jan  4 15:49:09.317: INFO: Created: latency-svc-zt5pv
Jan  4 15:49:09.335: INFO: Got endpoints: latency-svc-zt5pv [3.159602377s]
Jan  4 15:49:09.398: INFO: Created: latency-svc-bnfph
Jan  4 15:49:09.465: INFO: Got endpoints: latency-svc-bnfph [3.138618141s]
Jan  4 15:49:09.491: INFO: Created: latency-svc-vppvg
Jan  4 15:49:09.547: INFO: Got endpoints: latency-svc-vppvg [3.158519174s]
Jan  4 15:49:09.549: INFO: Created: latency-svc-h9ctr
Jan  4 15:49:09.670: INFO: Got endpoints: latency-svc-h9ctr [1.887795308s]
Jan  4 15:49:09.680: INFO: Created: latency-svc-bkzzh
Jan  4 15:49:09.687: INFO: Got endpoints: latency-svc-bkzzh [1.688809096s]
Jan  4 15:49:09.735: INFO: Created: latency-svc-2gmll
Jan  4 15:49:09.743: INFO: Got endpoints: latency-svc-2gmll [1.615150788s]
Jan  4 15:49:09.855: INFO: Created: latency-svc-8rmmj
Jan  4 15:49:09.862: INFO: Got endpoints: latency-svc-8rmmj [1.6605568s]
Jan  4 15:49:09.961: INFO: Created: latency-svc-7q7dr
Jan  4 15:49:10.039: INFO: Got endpoints: latency-svc-7q7dr [1.589090142s]
Jan  4 15:49:10.058: INFO: Created: latency-svc-ldddg
Jan  4 15:49:10.065: INFO: Got endpoints: latency-svc-ldddg [1.584787793s]
Jan  4 15:49:10.209: INFO: Created: latency-svc-q2jk9
Jan  4 15:49:10.241: INFO: Got endpoints: latency-svc-q2jk9 [1.612201031s]
Jan  4 15:49:10.284: INFO: Created: latency-svc-94fwp
Jan  4 15:49:10.298: INFO: Got endpoints: latency-svc-94fwp [1.481746203s]
Jan  4 15:49:10.434: INFO: Created: latency-svc-w25nd
Jan  4 15:49:10.441: INFO: Got endpoints: latency-svc-w25nd [1.457051889s]
Jan  4 15:49:10.485: INFO: Created: latency-svc-bdns6
Jan  4 15:49:10.496: INFO: Got endpoints: latency-svc-bdns6 [1.462641194s]
Jan  4 15:49:10.617: INFO: Created: latency-svc-6fhgl
Jan  4 15:49:10.624: INFO: Got endpoints: latency-svc-6fhgl [1.473354133s]
Jan  4 15:49:10.703: INFO: Created: latency-svc-6m2wt
Jan  4 15:49:10.779: INFO: Got endpoints: latency-svc-6m2wt [1.580275106s]
Jan  4 15:49:10.827: INFO: Created: latency-svc-2ss9h
Jan  4 15:49:10.828: INFO: Got endpoints: latency-svc-2ss9h [1.493149759s]
Jan  4 15:49:10.955: INFO: Created: latency-svc-vj5tw
Jan  4 15:49:10.959: INFO: Got endpoints: latency-svc-vj5tw [1.494374616s]
Jan  4 15:49:11.110: INFO: Created: latency-svc-9pdzk
Jan  4 15:49:11.113: INFO: Got endpoints: latency-svc-9pdzk [1.565514488s]
Jan  4 15:49:11.195: INFO: Created: latency-svc-z4b8g
Jan  4 15:49:11.297: INFO: Got endpoints: latency-svc-z4b8g [1.626696958s]
Jan  4 15:49:11.328: INFO: Created: latency-svc-76fkz
Jan  4 15:49:11.348: INFO: Got endpoints: latency-svc-76fkz [1.660745077s]
Jan  4 15:49:11.374: INFO: Created: latency-svc-vg5nh
Jan  4 15:49:11.380: INFO: Got endpoints: latency-svc-vg5nh [1.63660411s]
Jan  4 15:49:11.467: INFO: Created: latency-svc-p5qgt
Jan  4 15:49:11.473: INFO: Got endpoints: latency-svc-p5qgt [1.610449923s]
Jan  4 15:49:11.554: INFO: Created: latency-svc-cbgsn
Jan  4 15:49:11.555: INFO: Got endpoints: latency-svc-cbgsn [1.515314149s]
Jan  4 15:49:11.686: INFO: Created: latency-svc-lm4wn
Jan  4 15:49:11.691: INFO: Got endpoints: latency-svc-lm4wn [1.62566375s]
Jan  4 15:49:11.797: INFO: Created: latency-svc-njn2d
Jan  4 15:49:11.808: INFO: Got endpoints: latency-svc-njn2d [1.567182132s]
Jan  4 15:49:11.858: INFO: Created: latency-svc-7fpbj
Jan  4 15:49:11.866: INFO: Got endpoints: latency-svc-7fpbj [1.567919733s]
Jan  4 15:49:12.006: INFO: Created: latency-svc-bp4hq
Jan  4 15:49:12.018: INFO: Got endpoints: latency-svc-bp4hq [1.576410035s]
Jan  4 15:49:12.078: INFO: Created: latency-svc-bcgjm
Jan  4 15:49:12.095: INFO: Got endpoints: latency-svc-bcgjm [1.598824985s]
Jan  4 15:49:12.165: INFO: Created: latency-svc-t56th
Jan  4 15:49:12.176: INFO: Got endpoints: latency-svc-t56th [1.551923959s]
Jan  4 15:49:12.233: INFO: Created: latency-svc-7f7lz
Jan  4 15:49:12.243: INFO: Got endpoints: latency-svc-7f7lz [1.463651905s]
Jan  4 15:49:12.365: INFO: Created: latency-svc-ljs4p
Jan  4 15:49:12.397: INFO: Got endpoints: latency-svc-ljs4p [1.568988212s]
Jan  4 15:49:12.411: INFO: Created: latency-svc-6spq6
Jan  4 15:49:12.430: INFO: Got endpoints: latency-svc-6spq6 [1.470256724s]
Jan  4 15:49:12.518: INFO: Created: latency-svc-xj9g7
Jan  4 15:49:12.526: INFO: Got endpoints: latency-svc-xj9g7 [1.412826387s]
Jan  4 15:49:12.576: INFO: Created: latency-svc-d7rms
Jan  4 15:49:12.580: INFO: Got endpoints: latency-svc-d7rms [1.282872059s]
Jan  4 15:49:12.669: INFO: Created: latency-svc-g5jhs
Jan  4 15:49:12.674: INFO: Got endpoints: latency-svc-g5jhs [1.326159859s]
Jan  4 15:49:12.717: INFO: Created: latency-svc-dvdpf
Jan  4 15:49:12.733: INFO: Got endpoints: latency-svc-dvdpf [1.352587724s]
Jan  4 15:49:12.758: INFO: Created: latency-svc-stdr5
Jan  4 15:49:12.870: INFO: Created: latency-svc-rrwh9
Jan  4 15:49:12.870: INFO: Got endpoints: latency-svc-stdr5 [1.397235142s]
Jan  4 15:49:12.876: INFO: Got endpoints: latency-svc-rrwh9 [1.321172011s]
Jan  4 15:49:12.930: INFO: Created: latency-svc-w2vsc
Jan  4 15:49:12.947: INFO: Got endpoints: latency-svc-w2vsc [1.255957702s]
Jan  4 15:49:13.049: INFO: Created: latency-svc-zkntw
Jan  4 15:49:13.105: INFO: Got endpoints: latency-svc-zkntw [1.296229322s]
Jan  4 15:49:13.112: INFO: Created: latency-svc-88ctj
Jan  4 15:49:13.218: INFO: Got endpoints: latency-svc-88ctj [1.352310446s]
Jan  4 15:49:13.252: INFO: Created: latency-svc-gg4gr
Jan  4 15:49:13.259: INFO: Got endpoints: latency-svc-gg4gr [1.241538626s]
Jan  4 15:49:13.303: INFO: Created: latency-svc-nzhc6
Jan  4 15:49:13.314: INFO: Got endpoints: latency-svc-nzhc6 [1.218482412s]
Jan  4 15:49:13.405: INFO: Created: latency-svc-k6xtj
Jan  4 15:49:13.409: INFO: Got endpoints: latency-svc-k6xtj [1.23280588s]
Jan  4 15:49:13.463: INFO: Created: latency-svc-c5659
Jan  4 15:49:13.468: INFO: Got endpoints: latency-svc-c5659 [1.224948344s]
Jan  4 15:49:13.572: INFO: Created: latency-svc-78vvf
Jan  4 15:49:13.603: INFO: Got endpoints: latency-svc-78vvf [1.205216487s]
Jan  4 15:49:13.611: INFO: Created: latency-svc-wwjmf
Jan  4 15:49:13.614: INFO: Got endpoints: latency-svc-wwjmf [1.183819113s]
Jan  4 15:49:13.652: INFO: Created: latency-svc-klz7s
Jan  4 15:49:13.659: INFO: Got endpoints: latency-svc-klz7s [1.133053756s]
Jan  4 15:49:13.765: INFO: Created: latency-svc-dh8j7
Jan  4 15:49:13.808: INFO: Got endpoints: latency-svc-dh8j7 [1.228271664s]
Jan  4 15:49:13.818: INFO: Created: latency-svc-fb6jv
Jan  4 15:49:13.928: INFO: Got endpoints: latency-svc-fb6jv [1.253570689s]
Jan  4 15:49:13.930: INFO: Created: latency-svc-ct54d
Jan  4 15:49:13.958: INFO: Got endpoints: latency-svc-ct54d [1.224714657s]
Jan  4 15:49:13.996: INFO: Created: latency-svc-pt54r
Jan  4 15:49:14.100: INFO: Got endpoints: latency-svc-pt54r [1.229507299s]
Jan  4 15:49:14.108: INFO: Created: latency-svc-4qcbr
Jan  4 15:49:14.143: INFO: Created: latency-svc-ltjhs
Jan  4 15:49:14.156: INFO: Got endpoints: latency-svc-4qcbr [1.279807996s]
Jan  4 15:49:14.159: INFO: Got endpoints: latency-svc-ltjhs [1.211096424s]
Jan  4 15:49:14.185: INFO: Created: latency-svc-4sgjx
Jan  4 15:49:14.194: INFO: Got endpoints: latency-svc-4sgjx [1.088959145s]
Jan  4 15:49:14.303: INFO: Created: latency-svc-nwgkn
Jan  4 15:49:14.313: INFO: Got endpoints: latency-svc-nwgkn [1.093805184s]
Jan  4 15:49:14.355: INFO: Created: latency-svc-lkbn8
Jan  4 15:49:14.374: INFO: Got endpoints: latency-svc-lkbn8 [1.11415359s]
Jan  4 15:49:14.444: INFO: Created: latency-svc-wcmt4
Jan  4 15:49:14.452: INFO: Got endpoints: latency-svc-wcmt4 [1.137966394s]
Jan  4 15:49:14.534: INFO: Created: latency-svc-gbkjx
Jan  4 15:49:14.587: INFO: Got endpoints: latency-svc-gbkjx [1.178146848s]
Jan  4 15:49:14.619: INFO: Created: latency-svc-khl8x
Jan  4 15:49:14.629: INFO: Got endpoints: latency-svc-khl8x [1.160557341s]
Jan  4 15:49:14.683: INFO: Created: latency-svc-5shnp
Jan  4 15:49:14.735: INFO: Got endpoints: latency-svc-5shnp [1.131190739s]
Jan  4 15:49:14.779: INFO: Created: latency-svc-qnttr
Jan  4 15:49:14.796: INFO: Got endpoints: latency-svc-qnttr [1.181910811s]
Jan  4 15:49:14.830: INFO: Created: latency-svc-zlk4s
Jan  4 15:49:14.912: INFO: Got endpoints: latency-svc-zlk4s [1.252654623s]
Jan  4 15:49:14.951: INFO: Created: latency-svc-mdzgr
Jan  4 15:49:14.980: INFO: Got endpoints: latency-svc-mdzgr [1.171419573s]
Jan  4 15:49:14.982: INFO: Created: latency-svc-nlbm8
Jan  4 15:49:14.988: INFO: Got endpoints: latency-svc-nlbm8 [1.059505603s]
Jan  4 15:49:15.087: INFO: Created: latency-svc-vhlpw
Jan  4 15:49:15.094: INFO: Got endpoints: latency-svc-vhlpw [1.13528707s]
Jan  4 15:49:15.094: INFO: Latencies: [156.024706ms 173.287766ms 185.435515ms 300.806979ms 375.092012ms 461.598902ms 564.321817ms 720.405534ms 757.078876ms 943.529519ms 968.753509ms 1.059505603s 1.083627847s 1.088959145s 1.093805184s 1.11364905s 1.11415359s 1.131190739s 1.133053756s 1.13528707s 1.137966394s 1.151111689s 1.160557341s 1.171419573s 1.178146848s 1.181910811s 1.18272446s 1.183819113s 1.200173028s 1.205216487s 1.211096424s 1.218482412s 1.219552462s 1.224714657s 1.224948344s 1.228271664s 1.229507299s 1.23280588s 1.241538626s 1.252654623s 1.253570689s 1.255957702s 1.261404758s 1.268583291s 1.273254893s 1.27490307s 1.279630538s 1.279807996s 1.282872059s 1.283630889s 1.296229322s 1.302738864s 1.303170299s 1.309287875s 1.321172011s 1.326159859s 1.329259157s 1.334540389s 1.340779295s 1.341039288s 1.342878083s 1.352310446s 1.352587724s 1.355860545s 1.356402673s 1.366806031s 1.374883861s 1.382400049s 1.390041777s 1.397235142s 1.404231759s 1.411088638s 1.412826387s 1.417093746s 1.418150968s 1.423955033s 1.437927828s 1.44575382s 1.457051889s 1.460389214s 1.461631174s 1.462641194s 1.463651905s 1.466784172s 1.470256724s 1.473354133s 1.473831498s 1.480456403s 1.481746203s 1.484668617s 1.489211818s 1.492810822s 1.493149759s 1.494374616s 1.495478389s 1.504085885s 1.513495749s 1.515314149s 1.517298118s 1.520785491s 1.521613255s 1.53523387s 1.541389759s 1.545564327s 1.551923959s 1.552745024s 1.565514488s 1.565540696s 1.567182132s 1.567919733s 1.568988212s 1.571089594s 1.572002672s 1.576410035s 1.578942656s 1.580275106s 1.584787793s 1.589090142s 1.590883062s 1.591549834s 1.595797057s 1.597914713s 1.598824985s 1.599147009s 1.604061015s 1.606893101s 1.610449923s 1.612201031s 1.612537934s 1.615150788s 1.618695675s 1.61991059s 1.620858853s 1.621754773s 1.622217869s 1.624191822s 1.625481268s 1.62566375s 1.626696958s 1.634681711s 1.63660411s 1.638283247s 1.638646913s 1.64303368s 1.645394383s 1.647449329s 1.648443404s 1.653034142s 1.653852672s 1.6605568s 1.660745077s 1.66759636s 
1.668293365s 1.688809096s 1.695484857s 1.701187927s 1.707099103s 1.716362316s 1.726727862s 1.730825025s 1.732795533s 1.734723112s 1.755004029s 1.755273414s 1.809687051s 1.819698894s 1.861910813s 1.887795308s 1.913766453s 1.915994181s 1.937839227s 2.024218807s 2.091182269s 2.196262669s 2.238599674s 2.271370435s 2.27199655s 2.277669274s 2.300376215s 2.306056077s 2.309293177s 2.327023411s 2.38074571s 2.384095328s 2.412624796s 2.857791761s 2.922561667s 2.973339035s 2.98360046s 3.055313352s 3.073610494s 3.093509106s 3.138618141s 3.158519174s 3.159602377s 3.167693847s 3.191260668s 3.227404538s 3.291819387s 3.293192128s]
Jan  4 15:49:15.094: INFO: 50 %ile: 1.521613255s
Jan  4 15:49:15.094: INFO: 90 %ile: 2.309293177s
Jan  4 15:49:15.094: INFO: 99 %ile: 3.291819387s
Jan  4 15:49:15.094: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:49:15.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2540" for this suite.
Jan  4 15:49:55.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:49:55.230: INFO: namespace svc-latency-2540 deletion completed in 40.129717502s

• [SLOW TEST:71.235 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
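The percentile lines above (50 %ile, 90 %ile, 99 %ile over 200 samples) come from the e2e framework's latency summary. A minimal sketch of how such percentiles can be computed from a sorted sample list, assuming a nearest-rank convention (the actual Go framework may use a different rounding rule):

```python
import math

def percentile(sorted_samples, p):
    """Nearest-rank percentile: index = ceil(p/100 * n) - 1.

    This rounding convention is an assumption for illustration; the
    Kubernetes e2e framework's exact formula may differ slightly.
    """
    n = len(sorted_samples)
    idx = max(0, math.ceil(p / 100 * n) - 1)
    return sorted_samples[idx]

# 200 synthetic "latency" samples (1..200), standing in for the
# 200 service-endpoint latencies reported in the log above.
samples = sorted(range(1, 201))
print(percentile(samples, 50))  # -> 100
print(percentile(samples, 90))  # -> 180
print(percentile(samples, 99))  # -> 198
```

With real durations, the framework sorts the per-service latencies first (as in the `Latencies:` dump above) and then indexes into the sorted slice.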
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:49:55.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  4 15:49:55.377: INFO: Waiting up to 5m0s for pod "pod-5b70630b-32e4-4f96-b290-ab74537d551f" in namespace "emptydir-1559" to be "success or failure"
Jan  4 15:49:55.381: INFO: Pod "pod-5b70630b-32e4-4f96-b290-ab74537d551f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038955ms
Jan  4 15:49:57.391: INFO: Pod "pod-5b70630b-32e4-4f96-b290-ab74537d551f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013820293s
Jan  4 15:49:59.400: INFO: Pod "pod-5b70630b-32e4-4f96-b290-ab74537d551f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023469022s
Jan  4 15:50:01.407: INFO: Pod "pod-5b70630b-32e4-4f96-b290-ab74537d551f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029833595s
Jan  4 15:50:03.413: INFO: Pod "pod-5b70630b-32e4-4f96-b290-ab74537d551f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035717121s
Jan  4 15:50:05.418: INFO: Pod "pod-5b70630b-32e4-4f96-b290-ab74537d551f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.041007066s
STEP: Saw pod success
Jan  4 15:50:05.418: INFO: Pod "pod-5b70630b-32e4-4f96-b290-ab74537d551f" satisfied condition "success or failure"
Jan  4 15:50:05.421: INFO: Trying to get logs from node iruya-node pod pod-5b70630b-32e4-4f96-b290-ab74537d551f container test-container: 
STEP: delete the pod
Jan  4 15:50:05.576: INFO: Waiting for pod pod-5b70630b-32e4-4f96-b290-ab74537d551f to disappear
Jan  4 15:50:05.580: INFO: Pod pod-5b70630b-32e4-4f96-b290-ab74537d551f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:50:05.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1559" for this suite.
Jan  4 15:50:11.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:50:11.735: INFO: namespace emptydir-1559 deletion completed in 6.148953324s

• [SLOW TEST:16.505 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
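The emptydir test above waits up to 5m0s for the pod to reach "success or failure", logging the elapsed time on each poll. A hedged sketch of that wait loop, with `get_phase` as a hypothetical callable standing in for the real Kubernetes API lookup the framework performs:

```python
import time

def wait_for_pod_success_or_failure(get_phase, timeout_s=300, interval_s=2.0):
    """Poll a pod's phase until it is terminal (Succeeded or Failed).

    get_phase is a hypothetical callable returning the current phase
    string; the actual e2e framework reads it via the Kubernetes API.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")
```

The ~2 s gaps between the `Elapsed:` log lines above correspond to the poll interval in this kind of loop.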
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:50:11.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 15:50:11.861: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  4 15:50:16.872: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  4 15:50:20.885: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  4 15:50:22.895: INFO: Creating deployment "test-rollover-deployment"
Jan  4 15:50:22.924: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  4 15:50:24.956: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  4 15:50:24.961: INFO: Ensure that both replica sets have 1 created replica
Jan  4 15:50:24.966: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  4 15:50:24.980: INFO: Updating deployment test-rollover-deployment
Jan  4 15:50:24.980: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  4 15:50:27.324: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  4 15:50:27.339: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  4 15:50:27.343: INFO: all replica sets need to contain the pod-template-hash label
Jan  4 15:50:27.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749825, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749822, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:50:29.360: INFO: all replica sets need to contain the pod-template-hash label
Jan  4 15:50:29.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749825, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749822, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:50:31.524: INFO: all replica sets need to contain the pod-template-hash label
Jan  4 15:50:31.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749825, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749822, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:50:33.371: INFO: all replica sets need to contain the pod-template-hash label
Jan  4 15:50:33.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749825, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749822, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:50:35.360: INFO: all replica sets need to contain the pod-template-hash label
Jan  4 15:50:35.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749833, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749822, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:50:37.368: INFO: all replica sets need to contain the pod-template-hash label
Jan  4 15:50:37.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749833, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749822, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:50:39.359: INFO: all replica sets need to contain the pod-template-hash label
Jan  4 15:50:39.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749833, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749822, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:50:41.383: INFO: all replica sets need to contain the pod-template-hash label
Jan  4 15:50:41.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749833, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749822, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:50:43.369: INFO: all replica sets need to contain the pod-template-hash label
Jan  4 15:50:43.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749823, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749833, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713749822, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  4 15:50:45.357: INFO: 
Jan  4 15:50:45.357: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  4 15:50:45.372: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9886,SelfLink:/apis/apps/v1/namespaces/deployment-9886/deployments/test-rollover-deployment,UID:63d3a0c2-2f7c-4329-85d8-8225ff328e07,ResourceVersion:19291376,Generation:2,CreationTimestamp:2020-01-04 15:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-04 15:50:23 +0000 UTC 2020-01-04 15:50:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-04 15:50:43 +0000 UTC 2020-01-04 15:50:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  4 15:50:45.377: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9886,SelfLink:/apis/apps/v1/namespaces/deployment-9886/replicasets/test-rollover-deployment-854595fc44,UID:8fc199f1-ca68-4a12-be1b-8ee045a87034,ResourceVersion:19291364,Generation:2,CreationTimestamp:2020-01-04 15:50:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 63d3a0c2-2f7c-4329-85d8-8225ff328e07 0xc002ccff07 0xc002ccff08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  4 15:50:45.377: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  4 15:50:45.377: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9886,SelfLink:/apis/apps/v1/namespaces/deployment-9886/replicasets/test-rollover-controller,UID:5808f392-d148-476f-9683-78af1fe63c19,ResourceVersion:19291375,Generation:2,CreationTimestamp:2020-01-04 15:50:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 63d3a0c2-2f7c-4329-85d8-8225ff328e07 0xc002ccfe37 0xc002ccfe38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 15:50:45.377: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9886,SelfLink:/apis/apps/v1/namespaces/deployment-9886/replicasets/test-rollover-deployment-9b8b997cf,UID:931fdb6e-897c-41be-bb96-d78cedc403da,ResourceVersion:19291330,Generation:2,CreationTimestamp:2020-01-04 15:50:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 63d3a0c2-2f7c-4329-85d8-8225ff328e07 0xc002ccffd0 0xc002ccffd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  4 15:50:45.383: INFO: Pod "test-rollover-deployment-854595fc44-k4m8q" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-k4m8q,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9886,SelfLink:/api/v1/namespaces/deployment-9886/pods/test-rollover-deployment-854595fc44-k4m8q,UID:7bdf1d8e-4875-4888-b925-e5f3cb2d3920,ResourceVersion:19291348,Generation:0,CreationTimestamp:2020-01-04 15:50:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 8fc199f1-ca68-4a12-be1b-8ee045a87034 0xc000cf2be7 0xc000cf2be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5ghpg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ghpg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-5ghpg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000cf2c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000cf2c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:50:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:50:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:50:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-04 15:50:25 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-04 15:50:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-04 15:50:33 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://4366a8997470de182a7d4596446e2efbc01d7fc36344d12c2c97df16ca67e013}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:50:45.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9886" for this suite.
Jan  4 15:50:53.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:50:53.996: INFO: namespace deployment-9886 deletion completed in 8.607225035s

• [SLOW TEST:42.261 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:50:53.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  4 15:50:54.160: INFO: Waiting up to 5m0s for pod "pod-9ad00f7f-55be-4243-b485-0377f8ae8830" in namespace "emptydir-5132" to be "success or failure"
Jan  4 15:50:54.222: INFO: Pod "pod-9ad00f7f-55be-4243-b485-0377f8ae8830": Phase="Pending", Reason="", readiness=false. Elapsed: 62.147266ms
Jan  4 15:50:56.232: INFO: Pod "pod-9ad00f7f-55be-4243-b485-0377f8ae8830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071880633s
Jan  4 15:50:58.241: INFO: Pod "pod-9ad00f7f-55be-4243-b485-0377f8ae8830": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081726078s
Jan  4 15:51:00.313: INFO: Pod "pod-9ad00f7f-55be-4243-b485-0377f8ae8830": Phase="Pending", Reason="", readiness=false. Elapsed: 6.15364597s
Jan  4 15:51:02.346: INFO: Pod "pod-9ad00f7f-55be-4243-b485-0377f8ae8830": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186232586s
Jan  4 15:51:04.357: INFO: Pod "pod-9ad00f7f-55be-4243-b485-0377f8ae8830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.197349553s
STEP: Saw pod success
Jan  4 15:51:04.357: INFO: Pod "pod-9ad00f7f-55be-4243-b485-0377f8ae8830" satisfied condition "success or failure"
Jan  4 15:51:04.362: INFO: Trying to get logs from node iruya-node pod pod-9ad00f7f-55be-4243-b485-0377f8ae8830 container test-container: 
STEP: delete the pod
Jan  4 15:51:04.412: INFO: Waiting for pod pod-9ad00f7f-55be-4243-b485-0377f8ae8830 to disappear
Jan  4 15:51:04.418: INFO: Pod pod-9ad00f7f-55be-4243-b485-0377f8ae8830 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:51:04.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5132" for this suite.
Jan  4 15:51:10.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:51:10.595: INFO: namespace emptydir-5132 deletion completed in 6.168426614s

• [SLOW TEST:16.598 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:51:10.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  4 15:51:10.700: INFO: Waiting up to 5m0s for pod "downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c" in namespace "downward-api-3509" to be "success or failure"
Jan  4 15:51:10.821: INFO: Pod "downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c": Phase="Pending", Reason="", readiness=false. Elapsed: 120.766498ms
Jan  4 15:51:12.834: INFO: Pod "downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133902243s
Jan  4 15:51:14.841: INFO: Pod "downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140734992s
Jan  4 15:51:16.847: INFO: Pod "downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147218747s
Jan  4 15:51:18.857: INFO: Pod "downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157335171s
Jan  4 15:51:20.869: INFO: Pod "downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169085278s
STEP: Saw pod success
Jan  4 15:51:20.869: INFO: Pod "downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c" satisfied condition "success or failure"
Jan  4 15:51:20.876: INFO: Trying to get logs from node iruya-node pod downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c container dapi-container: 
STEP: delete the pod
Jan  4 15:51:21.016: INFO: Waiting for pod downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c to disappear
Jan  4 15:51:21.035: INFO: Pod downward-api-8fa22bf2-4977-4c8d-ba9c-704aa9a1603c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:51:21.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3509" for this suite.
Jan  4 15:51:27.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:51:27.217: INFO: namespace downward-api-3509 deletion completed in 6.173562091s

• [SLOW TEST:16.621 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:51:27.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  4 15:51:27.312: INFO: Waiting up to 5m0s for pod "pod-bf6d1683-da25-4bc0-8d88-7f518df056ad" in namespace "emptydir-2679" to be "success or failure"
Jan  4 15:51:27.321: INFO: Pod "pod-bf6d1683-da25-4bc0-8d88-7f518df056ad": Phase="Pending", Reason="", readiness=false. Elapsed: 9.011913ms
Jan  4 15:51:29.332: INFO: Pod "pod-bf6d1683-da25-4bc0-8d88-7f518df056ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019992612s
Jan  4 15:51:31.340: INFO: Pod "pod-bf6d1683-da25-4bc0-8d88-7f518df056ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028283675s
Jan  4 15:51:33.355: INFO: Pod "pod-bf6d1683-da25-4bc0-8d88-7f518df056ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042926217s
Jan  4 15:51:35.363: INFO: Pod "pod-bf6d1683-da25-4bc0-8d88-7f518df056ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051195844s
Jan  4 15:51:37.377: INFO: Pod "pod-bf6d1683-da25-4bc0-8d88-7f518df056ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064739751s
STEP: Saw pod success
Jan  4 15:51:37.377: INFO: Pod "pod-bf6d1683-da25-4bc0-8d88-7f518df056ad" satisfied condition "success or failure"
Jan  4 15:51:37.380: INFO: Trying to get logs from node iruya-node pod pod-bf6d1683-da25-4bc0-8d88-7f518df056ad container test-container: 
STEP: delete the pod
Jan  4 15:51:37.427: INFO: Waiting for pod pod-bf6d1683-da25-4bc0-8d88-7f518df056ad to disappear
Jan  4 15:51:37.448: INFO: Pod pod-bf6d1683-da25-4bc0-8d88-7f518df056ad no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:51:37.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2679" for this suite.
Jan  4 15:51:43.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:51:43.601: INFO: namespace emptydir-2679 deletion completed in 6.148956003s

• [SLOW TEST:16.383 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:51:43.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-dce26918-2a23-469b-ad63-b95a9a9d09cb in namespace container-probe-2493
Jan  4 15:51:51.735: INFO: Started pod busybox-dce26918-2a23-469b-ad63-b95a9a9d09cb in namespace container-probe-2493
STEP: checking the pod's current state and verifying that restartCount is present
Jan  4 15:51:51.737: INFO: Initial restart count of pod busybox-dce26918-2a23-469b-ad63-b95a9a9d09cb is 0
Jan  4 15:52:44.741: INFO: Restart count of pod container-probe-2493/busybox-dce26918-2a23-469b-ad63-b95a9a9d09cb is now 1 (53.004073159s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:52:44.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2493" for this suite.
Jan  4 15:52:50.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:52:51.011: INFO: namespace container-probe-2493 deletion completed in 6.185233851s

• [SLOW TEST:67.410 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:52:51.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 15:53:05.264: INFO: File wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local from pod  dns-423/dns-test-bc525046-b40c-4611-9b70-c59264c77d39 contains '' instead of 'foo.example.com.'
Jan  4 15:53:05.272: INFO: File jessie_udp@dns-test-service-3.dns-423.svc.cluster.local from pod  dns-423/dns-test-bc525046-b40c-4611-9b70-c59264c77d39 contains '' instead of 'foo.example.com.'
Jan  4 15:53:05.272: INFO: Lookups using dns-423/dns-test-bc525046-b40c-4611-9b70-c59264c77d39 failed for: [wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local jessie_udp@dns-test-service-3.dns-423.svc.cluster.local]

Jan  4 15:53:10.443: INFO: DNS probes using dns-test-bc525046-b40c-4611-9b70-c59264c77d39 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 15:53:30.970: INFO: File wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local from pod  dns-423/dns-test-93424654-fe44-45a3-ae15-29497d167f84 contains '' instead of 'bar.example.com.'
Jan  4 15:53:30.975: INFO: File jessie_udp@dns-test-service-3.dns-423.svc.cluster.local from pod  dns-423/dns-test-93424654-fe44-45a3-ae15-29497d167f84 contains '' instead of 'bar.example.com.'
Jan  4 15:53:30.975: INFO: Lookups using dns-423/dns-test-93424654-fe44-45a3-ae15-29497d167f84 failed for: [wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local jessie_udp@dns-test-service-3.dns-423.svc.cluster.local]

Jan  4 15:53:36.678: INFO: DNS probes using dns-test-93424654-fe44-45a3-ae15-29497d167f84 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-423.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-423.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  4 15:53:57.148: INFO: File jessie_udp@dns-test-service-3.dns-423.svc.cluster.local from pod  dns-423/dns-test-69b97151-74cd-4baa-98ff-d55961bb1f7d contains '' instead of '10.104.148.42'
Jan  4 15:53:57.148: INFO: Lookups using dns-423/dns-test-69b97151-74cd-4baa-98ff-d55961bb1f7d failed for: [jessie_udp@dns-test-service-3.dns-423.svc.cluster.local]

Jan  4 15:54:02.801: INFO: DNS probes using dns-test-69b97151-74cd-4baa-98ff-d55961bb1f7d succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:54:03.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-423" for this suite.
Jan  4 15:54:13.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:54:13.538: INFO: namespace dns-423 deletion completed in 10.378324067s

• [SLOW TEST:82.526 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
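The probe commands injected into the wheezy/jessie pods (visible in the STEP lines above) can be sketched as a standalone script. This is a hedged, offline sketch: `dig` is stubbed here so it runs without cluster DNS; in the real probe pod the query goes through the cluster resolver.

```shell
#!/bin/sh
# Stub for cluster DNS: a real probe pod resolves the ExternalName service's
# CNAME via /etc/resolv.conf. Stubbed so this sketch runs offline.
dig() { echo "foo.example.com."; }

mkdir -p /tmp/results
# The test loops 30 times at 1s intervals; shortened to 3 iterations here.
for i in $(seq 1 3); do
  dig +short dns-test-service-3.dns-423.svc.cluster.local CNAME \
    > "/tmp/results/wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local"
  sleep 1
done
cat "/tmp/results/wheezy_udp@dns-test-service-3.dns-423.svc.cluster.local"
```

The framework then reads each results file from the probe pod; an empty file (as in the first few INFO lines) means the CNAME record had not propagated yet, and the lookup is retried.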
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:54:13.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:54:27.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-338" for this suite.
Jan  4 15:55:07.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:55:07.201: INFO: namespace replication-controller-338 deletion completed in 40.098054567s

• [SLOW TEST:53.663 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
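Adoption can be confirmed from the orphan pod's ownerReferences. A minimal sketch follows; `kubectl` is stubbed so it runs offline, and the controller name `rc-adoption` is a hypothetical stand-in (the log does not show the RC's name).

```shell
#!/bin/sh
# Stub: on a real cluster this jsonpath query returns the adopting
# ReplicationController's name from the pod's ownerReferences.
kubectl() { echo "rc-adoption"; }

owner=$(kubectl get pod pod-adoption \
  --namespace=replication-controller-338 \
  -o jsonpath='{.metadata.ownerReferences[0].name}')
echo "adopted by: $owner"
```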
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:55:07.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:55:07.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1328" for this suite.
Jan  4 15:55:33.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:55:35.471: INFO: namespace pods-1328 deletion completed in 28.064525544s

• [SLOW TEST:28.269 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
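The QOS class the test verifies is recorded in pod status. A sketch of that check, with `kubectl` stubbed so it runs offline; the pod name `pod-qos` is hypothetical, and `BestEffort` is the class a pod with no resource requests or limits would report on a real cluster.

```shell
#!/bin/sh
# Stub: a real query reads .status.qosClass, derived by the kubelet from
# the pod's resource requests and limits (Guaranteed/Burstable/BestEffort).
kubectl() { echo "BestEffort"; }

qos=$(kubectl get pod pod-qos --namespace=pods-1328 \
  -o jsonpath='{.status.qosClass}')
echo "QoS class: $qos"
```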
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:55:35.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  4 15:55:57.866: INFO: Successfully updated pod "annotationupdate116e72bd-0c4b-4908-8f1b-43f5d001201e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:56:00.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9127" for this suite.
Jan  4 15:56:48.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:56:48.209: INFO: namespace downward-api-9127 deletion completed in 46.320853517s

• [SLOW TEST:72.737 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
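The "Successfully updated pod" line above corresponds to annotating the pod and re-reading the downward API file that projects the annotations. A hypothetical offline sketch: `kubectl` is stubbed, and the annotation key `builder` and mount path `/etc/podinfo/annotations` are assumptions, not taken from the log.

```shell
#!/bin/sh
# Stubs: `annotate` acknowledges the update; `exec` returns the refreshed
# contents of the downward API annotations file inside the pod.
kubectl() {
  case "$1" in
    annotate) echo "pod/annotationupdate annotated" ;;
    exec)     echo 'builder="bar"' ;;
  esac
}

kubectl annotate pod annotationupdate builder=bar --overwrite
line=$(kubectl exec annotationupdate -- cat /etc/podinfo/annotations | grep builder)
echo "$line"
```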
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:56:48.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  4 15:56:48.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4717'
Jan  4 15:56:53.357: INFO: stderr: ""
Jan  4 15:56:53.358: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 15:56:53.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:56:53.595: INFO: stderr: ""
Jan  4 15:56:53.595: INFO: stdout: "update-demo-nautilus-c9dc8 update-demo-nautilus-t42wq "
Jan  4 15:56:53.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:56:54.067: INFO: stderr: ""
Jan  4 15:56:54.067: INFO: stdout: ""
Jan  4 15:56:54.067: INFO: update-demo-nautilus-c9dc8 is created but not running
Jan  4 15:56:59.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:57:08.935: INFO: stderr: ""
Jan  4 15:57:08.935: INFO: stdout: "update-demo-nautilus-c9dc8 update-demo-nautilus-t42wq "
Jan  4 15:57:08.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:57:09.955: INFO: stderr: ""
Jan  4 15:57:09.956: INFO: stdout: ""
Jan  4 15:57:09.956: INFO: update-demo-nautilus-c9dc8 is created but not running
Jan  4 15:57:14.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:57:17.417: INFO: stderr: ""
Jan  4 15:57:17.418: INFO: stdout: "update-demo-nautilus-c9dc8 update-demo-nautilus-t42wq "
Jan  4 15:57:17.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:57:19.005: INFO: stderr: ""
Jan  4 15:57:19.005: INFO: stdout: ""
Jan  4 15:57:19.006: INFO: update-demo-nautilus-c9dc8 is created but not running
Jan  4 15:57:24.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:57:25.247: INFO: stderr: ""
Jan  4 15:57:25.248: INFO: stdout: "update-demo-nautilus-c9dc8 update-demo-nautilus-t42wq "
Jan  4 15:57:25.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:57:28.762: INFO: stderr: ""
Jan  4 15:57:28.762: INFO: stdout: ""
Jan  4 15:57:28.762: INFO: update-demo-nautilus-c9dc8 is created but not running
Jan  4 15:57:33.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:57:36.450: INFO: stderr: ""
Jan  4 15:57:36.450: INFO: stdout: "update-demo-nautilus-c9dc8 update-demo-nautilus-t42wq "
Jan  4 15:57:36.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:57:37.303: INFO: stderr: ""
Jan  4 15:57:37.303: INFO: stdout: ""
Jan  4 15:57:37.303: INFO: update-demo-nautilus-c9dc8 is created but not running
Jan  4 15:57:42.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:57:42.400: INFO: stderr: ""
Jan  4 15:57:42.400: INFO: stdout: "update-demo-nautilus-c9dc8 update-demo-nautilus-t42wq "
Jan  4 15:57:42.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dc8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:57:42.519: INFO: stderr: ""
Jan  4 15:57:42.519: INFO: stdout: "true"
Jan  4 15:57:42.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-c9dc8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:57:42.649: INFO: stderr: ""
Jan  4 15:57:42.649: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 15:57:42.649: INFO: validating pod update-demo-nautilus-c9dc8
Jan  4 15:57:42.657: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 15:57:42.657: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 15:57:42.657: INFO: update-demo-nautilus-c9dc8 is verified up and running
Jan  4 15:57:42.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t42wq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:57:42.740: INFO: stderr: ""
Jan  4 15:57:42.740: INFO: stdout: "true"
Jan  4 15:57:42.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t42wq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:57:42.831: INFO: stderr: ""
Jan  4 15:57:42.831: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 15:57:42.831: INFO: validating pod update-demo-nautilus-t42wq
Jan  4 15:57:42.848: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 15:57:42.848: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 15:57:42.848: INFO: update-demo-nautilus-t42wq is verified up and running
STEP: scaling down the replication controller
Jan  4 15:57:42.850: INFO: scanned /root for discovery docs: 
Jan  4 15:57:42.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4717'
Jan  4 15:58:14.015: INFO: stderr: ""
Jan  4 15:58:14.015: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 15:58:14.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:58:14.109: INFO: stderr: ""
Jan  4 15:58:14.109: INFO: stdout: "update-demo-nautilus-c9dc8 update-demo-nautilus-t42wq "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  4 15:58:19.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:58:19.197: INFO: stderr: ""
Jan  4 15:58:19.197: INFO: stdout: "update-demo-nautilus-t42wq "
Jan  4 15:58:19.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t42wq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:58:19.271: INFO: stderr: ""
Jan  4 15:58:19.271: INFO: stdout: "true"
Jan  4 15:58:19.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t42wq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:58:19.351: INFO: stderr: ""
Jan  4 15:58:19.352: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 15:58:19.352: INFO: validating pod update-demo-nautilus-t42wq
Jan  4 15:58:19.370: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 15:58:19.370: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 15:58:19.371: INFO: update-demo-nautilus-t42wq is verified up and running
STEP: scaling up the replication controller
Jan  4 15:58:19.373: INFO: scanned /root for discovery docs: 
Jan  4 15:58:19.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4717'
Jan  4 15:58:20.515: INFO: stderr: ""
Jan  4 15:58:20.515: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 15:58:20.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:58:20.601: INFO: stderr: ""
Jan  4 15:58:20.602: INFO: stdout: "update-demo-nautilus-s29tg update-demo-nautilus-t42wq "
Jan  4 15:58:20.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s29tg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:58:21.161: INFO: stderr: ""
Jan  4 15:58:21.161: INFO: stdout: ""
Jan  4 15:58:21.161: INFO: update-demo-nautilus-s29tg is created but not running
Jan  4 15:58:26.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:58:28.127: INFO: stderr: ""
Jan  4 15:58:28.127: INFO: stdout: "update-demo-nautilus-s29tg update-demo-nautilus-t42wq "
Jan  4 15:58:28.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s29tg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:58:28.979: INFO: stderr: ""
Jan  4 15:58:28.979: INFO: stdout: ""
Jan  4 15:58:28.979: INFO: update-demo-nautilus-s29tg is created but not running
Jan  4 15:58:33.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4717'
Jan  4 15:58:34.685: INFO: stderr: ""
Jan  4 15:58:34.686: INFO: stdout: "update-demo-nautilus-s29tg update-demo-nautilus-t42wq "
Jan  4 15:58:34.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s29tg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:58:35.623: INFO: stderr: ""
Jan  4 15:58:35.623: INFO: stdout: "true"
Jan  4 15:58:35.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s29tg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:58:36.471: INFO: stderr: ""
Jan  4 15:58:36.472: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 15:58:36.472: INFO: validating pod update-demo-nautilus-s29tg
Jan  4 15:58:36.493: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 15:58:36.493: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 15:58:36.493: INFO: update-demo-nautilus-s29tg is verified up and running
Jan  4 15:58:36.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t42wq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:58:36.606: INFO: stderr: ""
Jan  4 15:58:36.607: INFO: stdout: "true"
Jan  4 15:58:36.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t42wq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4717'
Jan  4 15:58:37.181: INFO: stderr: ""
Jan  4 15:58:37.181: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 15:58:37.182: INFO: validating pod update-demo-nautilus-t42wq
Jan  4 15:58:37.648: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 15:58:37.649: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 15:58:37.649: INFO: update-demo-nautilus-t42wq is verified up and running
STEP: using delete to clean up resources
Jan  4 15:58:37.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4717'
Jan  4 15:58:37.791: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  4 15:58:37.791: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  4 15:58:37.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4717'
Jan  4 15:58:38.367: INFO: stderr: "No resources found.\n"
Jan  4 15:58:38.367: INFO: stdout: ""
Jan  4 15:58:38.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4717 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  4 15:58:38.552: INFO: stderr: ""
Jan  4 15:58:38.552: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 15:58:38.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4717" for this suite.
Jan  4 15:59:14.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 15:59:14.707: INFO: namespace kubectl-4717 deletion completed in 36.128278922s

• [SLOW TEST:146.498 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
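The repeated "created but not running" lines above are a poll loop: the same go-template query runs until it prints "true". A sketch of that loop, with `kubectl` stubbed via a counter file so it terminates offline (the real test polls roughly every 5 seconds):

```shell
#!/bin/sh
# Stub: returns "" on the first two calls and "true" on the third,
# mimicking a container that takes a few polls to reach Running.
echo 0 > /tmp/poll_count
kubectl() {
  n=$(( $(cat /tmp/poll_count) + 1 ))
  echo "$n" > /tmp/poll_count
  if [ "$n" -ge 3 ]; then echo "true"; fi
}

while :; do
  out=$(kubectl get pods update-demo-nautilus-c9dc8 --namespace=kubectl-4717)
  if [ "$out" = "true" ]; then
    echo "update-demo-nautilus-c9dc8 is running"
    break
  fi
  echo "update-demo-nautilus-c9dc8 is created but not running"
  sleep 1
done
```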
SS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 15:59:14.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-781dfb83-0d66-4410-9c38-8e6fc6169f51
STEP: Creating secret with name s-test-opt-upd-c5b3f7fc-b62a-44ac-a2aa-cbc47d22bd8a
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-781dfb83-0d66-4410-9c38-8e6fc6169f51
STEP: Updating secret s-test-opt-upd-c5b3f7fc-b62a-44ac-a2aa-cbc47d22bd8a
STEP: Creating secret with name s-test-opt-create-3e219906-c241-4fd5-8b6d-00799b7c1a0d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:00:53.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8914" for this suite.
Jan  4 16:01:17.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:01:19.336: INFO: namespace secrets-8914 deletion completed in 26.228230976s

• [SLOW TEST:124.629 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
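The "optional" behavior exercised above (the pod tolerating a deleted secret and later observing a newly created one) hinges on one field in the volume source. The fragment below is printed by the script so the sketch stays self-contained; the secret name is copied from the log, the volume name is an assumption.

```shell
#!/bin/sh
# `optional: true` lets the pod run while the referenced secret is absent,
# and the kubelet projects the data into the volume once the secret exists.
manifest=$(cat <<'EOF'
volumes:
- name: secret-volume
  secret:
    secretName: s-test-opt-create-3e219906-c241-4fd5-8b6d-00799b7c1a0d
    optional: true
EOF
)
echo "$manifest"
```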
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:01:19.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan  4 16:01:37.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-11b807e0-78e8-4637-a820-3cdbd4e970e2 -c busybox-main-container --namespace=emptydir-4073 -- cat /usr/share/volumeshare/shareddata.txt'
Jan  4 16:01:38.225: INFO: stderr: "I0104 16:01:37.836222    2978 log.go:172] (0xc0007d8160) (0xc0005d68c0) Create stream\nI0104 16:01:37.836750    2978 log.go:172] (0xc0007d8160) (0xc0005d68c0) Stream added, broadcasting: 1\nI0104 16:01:37.854872    2978 log.go:172] (0xc0007d8160) Reply frame received for 1\nI0104 16:01:37.855479    2978 log.go:172] (0xc0007d8160) (0xc0005d6960) Create stream\nI0104 16:01:37.855560    2978 log.go:172] (0xc0007d8160) (0xc0005d6960) Stream added, broadcasting: 3\nI0104 16:01:37.859129    2978 log.go:172] (0xc0007d8160) Reply frame received for 3\nI0104 16:01:37.859190    2978 log.go:172] (0xc0007d8160) (0xc00055c320) Create stream\nI0104 16:01:37.859206    2978 log.go:172] (0xc0007d8160) (0xc00055c320) Stream added, broadcasting: 5\nI0104 16:01:37.863411    2978 log.go:172] (0xc0007d8160) Reply frame received for 5\nI0104 16:01:38.024597    2978 log.go:172] (0xc0007d8160) Data frame received for 3\nI0104 16:01:38.024648    2978 log.go:172] (0xc0005d6960) (3) Data frame handling\nI0104 16:01:38.024678    2978 log.go:172] (0xc0005d6960) (3) Data frame sent\nI0104 16:01:38.213638    2978 log.go:172] (0xc0007d8160) (0xc0005d6960) Stream removed, broadcasting: 3\nI0104 16:01:38.213718    2978 log.go:172] (0xc0007d8160) Data frame received for 1\nI0104 16:01:38.213743    2978 log.go:172] (0xc0005d68c0) (1) Data frame handling\nI0104 16:01:38.213762    2978 log.go:172] (0xc0005d68c0) (1) Data frame sent\nI0104 16:01:38.213783    2978 log.go:172] (0xc0007d8160) (0xc00055c320) Stream removed, broadcasting: 5\nI0104 16:01:38.213846    2978 log.go:172] (0xc0007d8160) (0xc0005d68c0) Stream removed, broadcasting: 1\nI0104 16:01:38.213955    2978 log.go:172] (0xc0007d8160) Go away received\nI0104 16:01:38.214537    2978 log.go:172] (0xc0007d8160) (0xc0005d68c0) Stream removed, broadcasting: 1\nI0104 16:01:38.214573    2978 log.go:172] (0xc0007d8160) (0xc0005d6960) Stream removed, broadcasting: 3\nI0104 16:01:38.214580    2978 log.go:172] (0xc0007d8160) (0xc00055c320) Stream removed, broadcasting: 5\n"
Jan  4 16:01:38.225: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:01:38.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4073" for this suite.
Jan  4 16:01:50.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:01:50.393: INFO: namespace emptydir-4073 deletion completed in 12.159697764s

• [SLOW TEST:31.056 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
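
The EmptyDir test above execs `cat` in one container to read a file written by a sibling container through a shared `emptyDir` volume. A minimal sketch of that pod shape, built as a plain Python dict (the image, command, and volume names here are illustrative assumptions, not taken from the suite's actual spec):

```python
# Sketch of a two-container pod sharing an emptyDir volume, similar in
# shape to the pod-sharedvolume-* pod the test above execs into.
# All concrete names/images below are illustrative, not the test's own.

def shared_emptydir_pod(name="pod-sharedvolume-demo"):
    """Return a pod manifest (as a dict) where a writer container and a
    reader container mount the same emptyDir at the same path."""
    volume_name = "shared-data"  # hypothetical name
    mount = {"name": volume_name, "mountPath": "/usr/share/volumeshare"}
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "volumes": [{"name": volume_name, "emptyDir": {}}],
            "containers": [
                {   # main container: sleeps so we can `kubectl exec` into it
                    "name": "busybox-main-container",
                    "image": "busybox",
                    "command": ["sleep", "3600"],
                    "volumeMounts": [mount],
                },
                {   # sub container: writes the file the test later cats
                    "name": "busybox-sub-container",
                    "image": "busybox",
                    "command": ["sh", "-c",
                                "echo 'Hello from the busy-box sub-container' "
                                "> /usr/share/volumeshare/shareddata.txt"],
                    "volumeMounts": [mount],
                },
            ],
        },
    }
```

The key property the test relies on is that both containers reference the same volume name, so writes by one are visible to the other.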
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:01:50.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  4 16:01:50.467: INFO: Waiting up to 5m0s for pod "downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2" in namespace "downward-api-3133" to be "success or failure"
Jan  4 16:01:50.481: INFO: Pod "downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.536647ms
Jan  4 16:01:52.492: INFO: Pod "downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024263879s
Jan  4 16:01:54.502: INFO: Pod "downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03394461s
Jan  4 16:01:56.512: INFO: Pod "downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044065702s
Jan  4 16:01:58.791: INFO: Pod "downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.323860251s
Jan  4 16:02:00.797: INFO: Pod "downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.329299864s
Jan  4 16:02:02.805: INFO: Pod "downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.336993813s
STEP: Saw pod success
Jan  4 16:02:02.805: INFO: Pod "downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2" satisfied condition "success or failure"
Jan  4 16:02:02.807: INFO: Trying to get logs from node iruya-node pod downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2 container dapi-container: 
STEP: delete the pod
Jan  4 16:02:02.866: INFO: Waiting for pod downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2 to disappear
Jan  4 16:02:02.871: INFO: Pod downward-api-a2da13f2-c8a2-48d9-89af-c4a2d85eb7e2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:02:02.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3133" for this suite.
Jan  4 16:02:08.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:02:09.072: INFO: namespace downward-api-3133 deletion completed in 6.195791715s

• [SLOW TEST:18.679 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
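
The Downward API test above checks that a container with no explicit resource limits still sees CPU/memory limit env vars, defaulted from node allocatable. A sketch of the env entries such a pod would declare via `resourceFieldRef` (env var names here are assumptions, not the test's own):

```python
# Env entries exposing a container's effective CPU/memory limits via the
# downward API. With no explicit limits on the container, these resolve
# at runtime to the node's allocatable capacity, which is the behavior
# the test above verifies. Env var names are illustrative.

def downward_api_limit_envs(container_name="dapi-container"):
    def env(name, resource):
        return {
            "name": name,
            "valueFrom": {
                "resourceFieldRef": {
                    "containerName": container_name,
                    "resource": resource,
                }
            },
        }
    return [
        env("CPU_LIMIT", "limits.cpu"),
        env("MEMORY_LIMIT", "limits.memory"),
    ]
```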
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:02:09.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:02:09.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2190" for this suite.
Jan  4 16:02:17.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:02:17.850: INFO: namespace kubelet-test-2190 deletion completed in 8.478981614s

• [SLOW TEST:8.778 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:02:17.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 16:02:18.158: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  4 16:02:18.247: INFO: Number of nodes with available pods: 0
Jan  4 16:02:18.247: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:20.122: INFO: Number of nodes with available pods: 0
Jan  4 16:02:20.122: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:21.072: INFO: Number of nodes with available pods: 0
Jan  4 16:02:21.072: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:21.264: INFO: Number of nodes with available pods: 0
Jan  4 16:02:21.264: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:22.419: INFO: Number of nodes with available pods: 0
Jan  4 16:02:22.419: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:24.063: INFO: Number of nodes with available pods: 0
Jan  4 16:02:24.063: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:24.474: INFO: Number of nodes with available pods: 0
Jan  4 16:02:24.474: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:25.267: INFO: Number of nodes with available pods: 0
Jan  4 16:02:25.267: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:28.295: INFO: Number of nodes with available pods: 0
Jan  4 16:02:28.295: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:29.863: INFO: Number of nodes with available pods: 0
Jan  4 16:02:29.863: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:30.393: INFO: Number of nodes with available pods: 0
Jan  4 16:02:30.393: INFO: Node iruya-node is running more than one daemon pod
Jan  4 16:02:31.298: INFO: Number of nodes with available pods: 1
Jan  4 16:02:31.298: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 16:02:32.269: INFO: Number of nodes with available pods: 2
Jan  4 16:02:32.269: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  4 16:02:32.315: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:32.315: INFO: Wrong image for pod: daemon-set-gmtsv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:33.386: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:33.387: INFO: Wrong image for pod: daemon-set-gmtsv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:34.367: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:34.367: INFO: Wrong image for pod: daemon-set-gmtsv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:35.367: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:35.367: INFO: Wrong image for pod: daemon-set-gmtsv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:36.369: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:36.369: INFO: Wrong image for pod: daemon-set-gmtsv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:37.367: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:37.367: INFO: Wrong image for pod: daemon-set-gmtsv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:38.368: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:38.368: INFO: Wrong image for pod: daemon-set-gmtsv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:38.368: INFO: Pod daemon-set-gmtsv is not available
Jan  4 16:02:39.861: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:39.861: INFO: Wrong image for pod: daemon-set-gmtsv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:39.861: INFO: Pod daemon-set-gmtsv is not available
Jan  4 16:02:40.372: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:40.372: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:41.368: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:41.368: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:42.364: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:42.364: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:43.367: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:43.367: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:44.368: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:44.369: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:46.691: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:46.691: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:47.364: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:47.364: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:48.369: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:48.369: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:49.411: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:49.411: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:52.546: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:52.546: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:53.369: INFO: Pod daemon-set-4pcfp is not available
Jan  4 16:02:53.369: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:54.458: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:55.365: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:56.364: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:58.342: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:58.342: INFO: Pod daemon-set-d4cfs is not available
Jan  4 16:02:59.095: INFO: Wrong image for pod: daemon-set-d4cfs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  4 16:02:59.095: INFO: Pod daemon-set-d4cfs is not available
Jan  4 16:02:59.367: INFO: Pod daemon-set-8hshp is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  4 16:02:59.380: INFO: Number of nodes with available pods: 1
Jan  4 16:02:59.380: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 16:03:00.572: INFO: Number of nodes with available pods: 1
Jan  4 16:03:00.572: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 16:03:01.500: INFO: Number of nodes with available pods: 1
Jan  4 16:03:01.500: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 16:03:02.407: INFO: Number of nodes with available pods: 1
Jan  4 16:03:02.407: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 16:03:07.691: INFO: Number of nodes with available pods: 1
Jan  4 16:03:07.691: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 16:03:08.400: INFO: Number of nodes with available pods: 1
Jan  4 16:03:08.400: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 16:03:09.400: INFO: Number of nodes with available pods: 1
Jan  4 16:03:09.400: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 16:03:10.395: INFO: Number of nodes with available pods: 1
Jan  4 16:03:10.395: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  4 16:03:11.745: INFO: Number of nodes with available pods: 2
Jan  4 16:03:11.745: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4704, will wait for the garbage collector to delete the pods
Jan  4 16:03:12.305: INFO: Deleting DaemonSet.extensions daemon-set took: 65.565121ms
Jan  4 16:03:12.606: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.720095ms
Jan  4 16:03:19.213: INFO: Number of nodes with available pods: 0
Jan  4 16:03:19.213: INFO: Number of running nodes: 0, number of available pods: 0
Jan  4 16:03:19.217: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4704/daemonsets","resourceVersion":"19292858"},"items":null}

Jan  4 16:03:19.220: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4704/pods","resourceVersion":"19292858"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:03:19.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4704" for this suite.
Jan  4 16:03:27.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:03:27.377: INFO: namespace daemonsets-4704 deletion completed in 8.143589833s

• [SLOW TEST:69.526 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
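
The repeated "Number of nodes with available pods: N" lines above are a readiness poll: the test loops until every node runs an available daemon pod, both before and after the rolling image update. The exit condition it logs ("Number of running nodes: 2, number of available pods: 2") can be sketched as:

```python
# Sketch of the DaemonSet readiness poll seen in the log above: a node is
# covered once at least one daemon pod on it is available, and the rollout
# is done when covered nodes equal the desired node count. This mirrors the
# log's logic only; it is not the e2e framework's actual implementation.

def nodes_with_available_pods(pods_by_node):
    """pods_by_node maps node name -> list of pod availability booleans."""
    return sum(1 for pods in pods_by_node.values() if any(pods))

def rollout_complete(pods_by_node, desired_nodes):
    return nodes_with_available_pods(pods_by_node) == desired_nodes
```

During the update the old pod on a node goes unavailable before its replacement is ready, which is why the count drops back to 1 mid-rollout.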
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:03:27.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan  4 16:03:27.528: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:03:49.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5849" for this suite.
Jan  4 16:03:55.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:03:55.829: INFO: namespace pods-5849 deletion completed in 6.16680215s

• [SLOW TEST:28.452 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:03:55.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  4 16:03:55.969: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.248442ms)
Jan  4 16:03:55.978: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.48412ms)
Jan  4 16:03:55.982: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.110801ms)
Jan  4 16:03:55.986: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.970971ms)
Jan  4 16:03:55.989: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.868289ms)
Jan  4 16:03:55.992: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.064611ms)
Jan  4 16:03:55.995: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.723059ms)
Jan  4 16:03:55.998: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.138246ms)
Jan  4 16:03:56.001: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.955171ms)
Jan  4 16:03:56.004: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.870332ms)
Jan  4 16:03:56.006: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.697848ms)
Jan  4 16:03:56.009: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.715068ms)
Jan  4 16:03:56.012: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.400694ms)
Jan  4 16:03:56.016: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.333196ms)
Jan  4 16:03:56.019: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.384987ms)
Jan  4 16:03:56.022: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.794201ms)
Jan  4 16:03:56.061: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 38.946189ms)
Jan  4 16:03:56.079: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.328322ms)
Jan  4 16:03:56.097: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.039252ms)
Jan  4 16:03:56.105: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.370955ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:03:56.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5137" for this suite.
Jan  4 16:04:02.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:04:02.216: INFO: namespace proxy-5137 deletion completed in 6.104417711s

• [SLOW TEST:6.385 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
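
Each of the 20 requests above hits the node's `proxy` subresource with an explicit kubelet port embedded in the resource name (`iruya-node:10250`). A sketch of how that apiserver proxy path is composed:

```python
# Build the node proxy path the test above requests 20 times, e.g.
# /api/v1/nodes/iruya-node:10250/proxy/logs/ . The ":port" suffix on the
# node name selects an explicit kubelet port via the proxy subresource.
# A sketch of the URL shape only, not the e2e framework's own helper.

def node_log_proxy_path(node_name, kubelet_port=10250, subpath="logs/"):
    return f"/api/v1/nodes/{node_name}:{kubelet_port}/proxy/{subpath}"
```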
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:04:02.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  4 16:04:12.561: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:04:12.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8731" for this suite.
Jan  4 16:04:20.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:04:20.878: INFO: namespace container-runtime-8731 deletion completed in 8.155738249s

• [SLOW TEST:18.662 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:04:20.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-0ec362f0-f25f-4eb2-84c2-ac9a39cd6dd1 in namespace container-probe-1800
Jan  4 16:04:30.984: INFO: Started pod liveness-0ec362f0-f25f-4eb2-84c2-ac9a39cd6dd1 in namespace container-probe-1800
STEP: checking the pod's current state and verifying that restartCount is present
Jan  4 16:04:30.988: INFO: Initial restart count of pod liveness-0ec362f0-f25f-4eb2-84c2-ac9a39cd6dd1 is 0
Jan  4 16:04:55.694: INFO: Restart count of pod container-probe-1800/liveness-0ec362f0-f25f-4eb2-84c2-ac9a39cd6dd1 is now 1 (24.705689432s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:04:55.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1800" for this suite.
Jan  4 16:05:01.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:05:04.391: INFO: namespace container-probe-1800 deletion completed in 8.648185617s

• [SLOW TEST:43.513 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
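
The liveness test above waits for `restartCount` to go from 0 to 1 after the pod's `/healthz` endpoint starts failing. A sketch of the probe stanza such a pod carries (port and timing values are assumptions for illustration, not the test's actual numbers):

```python
# Sketch of an httpGet liveness probe of the kind exercised above: once
# /healthz starts returning errors, the kubelet kills and restarts the
# container, which is what bumps restartCount from 0 to 1 in the log.
# Port and timing values below are illustrative assumptions.

def http_liveness_probe(path="/healthz", port=8080,
                        initial_delay=15, failure_threshold=1):
    return {
        "httpGet": {"path": path, "port": port},
        "initialDelaySeconds": initial_delay,
        "failureThreshold": failure_threshold,
    }
```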
------------------------------
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:05:04.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan  4 16:05:14.684: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan  4 16:05:24.792: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:05:24.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7366" for this suite.
Jan  4 16:05:30.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:05:30.931: INFO: namespace pods-7366 deletion completed in 6.128504689s

• [SLOW TEST:26.539 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
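The grace-period test above submits a pod, deletes it gracefully, and verifies the kubelet observed the termination notice. The relevant knob is `terminationGracePeriodSeconds`; a minimal sketch (name, image, and value are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod                  # illustrative
spec:
  terminationGracePeriodSeconds: 30   # how long the kubelet waits after SIGTERM before SIGKILL
  containers:
  - name: app
    image: nginx                      # illustrative image
```

At deletion time, `kubectl delete pod graceful-pod --grace-period=5` would override the spec value.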
SSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:05:30.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-ea143398-3588-46ec-aee6-2829b47e5689 in namespace container-probe-5623
Jan  4 16:05:41.050: INFO: Started pod liveness-ea143398-3588-46ec-aee6-2829b47e5689 in namespace container-probe-5623
STEP: checking the pod's current state and verifying that restartCount is present
Jan  4 16:05:41.055: INFO: Initial restart count of pod liveness-ea143398-3588-46ec-aee6-2829b47e5689 is 0
Jan  4 16:05:55.731: INFO: Restart count of pod container-probe-5623/liveness-ea143398-3588-46ec-aee6-2829b47e5689 is now 1 (14.676195591s elapsed)
Jan  4 16:06:19.319: INFO: Restart count of pod container-probe-5623/liveness-ea143398-3588-46ec-aee6-2829b47e5689 is now 2 (38.263727106s elapsed)
Jan  4 16:06:33.389: INFO: Restart count of pod container-probe-5623/liveness-ea143398-3588-46ec-aee6-2829b47e5689 is now 3 (52.33423748s elapsed)
Jan  4 16:06:55.516: INFO: Restart count of pod container-probe-5623/liveness-ea143398-3588-46ec-aee6-2829b47e5689 is now 4 (1m14.461200582s elapsed)
Jan  4 16:07:57.972: INFO: Restart count of pod container-probe-5623/liveness-ea143398-3588-46ec-aee6-2829b47e5689 is now 5 (2m16.916893693s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:07:58.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5623" for this suite.
Jan  4 16:08:04.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:08:04.160: INFO: namespace container-probe-5623 deletion completed in 6.11039741s

• [SLOW TEST:153.229 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
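The monotonically-increasing-restart-count test needs a probe that keeps failing. An exec-style liveness probe is one way to produce that; a hedged sketch (command and timings are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec          # illustrative
spec:
  containers:
  - name: liveness
    image: busybox             # illustrative
    args:
    - /bin/sh
    - -c
    - "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
```

Each probe failure triggers another restart, so `status.containerStatuses[0].restartCount` keeps climbing — exactly what the log's "Restart count ... is now N" lines record.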
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:08:04.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-4b5926d5-f290-42df-8bcb-91d21d4bb2d4
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-4b5926d5-f290-42df-8bcb-91d21d4bb2d4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:08:14.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8634" for this suite.
Jan  4 16:08:52.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:08:52.638: INFO: namespace projected-8634 deletion completed in 38.186154756s

• [SLOW TEST:48.478 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
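The projected-ConfigMap test above creates a pod, mutates the ConfigMap, and waits for the change to appear in the mounted files. A minimal sketch of such a pod (names, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod       # illustrative
spec:
  containers:
  - name: reader
    image: busybox             # illustrative
    command: ["sh", "-c", "while true; do cat /etc/config/*; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # illustrative name
```

The kubelet periodically syncs ConfigMap changes into the mounted files, which is the eventual update the test observes.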
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:08:52.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan  4 16:08:52.732: INFO: Waiting up to 5m0s for pod "client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188" in namespace "containers-6313" to be "success or failure"
Jan  4 16:08:52.797: INFO: Pod "client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188": Phase="Pending", Reason="", readiness=false. Elapsed: 65.104935ms
Jan  4 16:08:54.821: INFO: Pod "client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088775964s
Jan  4 16:08:57.664: INFO: Pod "client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188": Phase="Pending", Reason="", readiness=false. Elapsed: 4.93239185s
Jan  4 16:08:59.673: INFO: Pod "client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188": Phase="Pending", Reason="", readiness=false. Elapsed: 6.941236122s
Jan  4 16:09:01.697: INFO: Pod "client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188": Phase="Pending", Reason="", readiness=false. Elapsed: 8.965214837s
Jan  4 16:09:03.704: INFO: Pod "client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.972083443s
STEP: Saw pod success
Jan  4 16:09:03.704: INFO: Pod "client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188" satisfied condition "success or failure"
Jan  4 16:09:03.708: INFO: Trying to get logs from node iruya-node pod client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188 container test-container: 
STEP: delete the pod
Jan  4 16:09:03.894: INFO: Waiting for pod client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188 to disappear
Jan  4 16:09:03.908: INFO: Pod client-containers-001b38b2-4df1-461e-b5cb-30bdda0ab188 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:09:03.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6313" for this suite.
Jan  4 16:09:09.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:09:10.020: INFO: namespace containers-6313 deletion completed in 6.105035719s

• [SLOW TEST:17.382 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
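The entrypoint-override test above exercises the mapping between the Pod API and Docker image defaults: `command` replaces the image's ENTRYPOINT, `args` replaces CMD. A minimal sketch (name, image, and strings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # illustrative
    command: ["echo"]                # overrides the image's ENTRYPOINT
    args: ["hello from command override"]
```

The test then reads the container's logs to confirm the overridden command ran.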
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:09:10.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 16:09:10.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211" in namespace "projected-9970" to be "success or failure"
Jan  4 16:09:10.157: INFO: Pod "downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211": Phase="Pending", Reason="", readiness=false. Elapsed: 19.712461ms
Jan  4 16:09:13.124: INFO: Pod "downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211": Phase="Pending", Reason="", readiness=false. Elapsed: 2.987515103s
Jan  4 16:09:15.134: INFO: Pod "downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211": Phase="Pending", Reason="", readiness=false. Elapsed: 4.996732439s
Jan  4 16:09:17.140: INFO: Pod "downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211": Phase="Pending", Reason="", readiness=false. Elapsed: 7.003246301s
Jan  4 16:09:19.146: INFO: Pod "downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211": Phase="Pending", Reason="", readiness=false. Elapsed: 9.009127424s
Jan  4 16:09:21.152: INFO: Pod "downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.014695338s
STEP: Saw pod success
Jan  4 16:09:21.152: INFO: Pod "downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211" satisfied condition "success or failure"
Jan  4 16:09:21.154: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211 container client-container: 
STEP: delete the pod
Jan  4 16:09:21.205: INFO: Waiting for pod downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211 to disappear
Jan  4 16:09:21.211: INFO: Pod downwardapi-volume-1bd2df97-848e-410e-8aed-28f2ac629211 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:09:21.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9970" for this suite.
Jan  4 16:09:27.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:09:27.370: INFO: namespace projected-9970 deletion completed in 6.149750138s

• [SLOW TEST:17.350 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
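The downward API test above exposes the container's own memory limit as a file in a projected volume via `resourceFieldRef`. A hedged sketch (names, image, and the 64Mi limit are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit   # illustrative
spec:
  containers:
  - name: client-container
    image: busybox             # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The file then contains the limit in bytes, which the test compares against the spec.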
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:09:27.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:10:31.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3371" for this suite.
Jan  4 16:10:37.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:10:37.219: INFO: namespace container-runtime-3371 deletion completed in 6.091043799s

• [SLOW TEST:69.848 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
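The container names in the runtime test above appear to encode the restart policy under test — my reading of the suffixes, not stated in the log: `rpa` = Always, `rpof` = OnFailure, `rpn` = Never. A sketch of one variant (image and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof
spec:
  restartPolicy: OnFailure        # the policy the kubelet applies when the command exits
  containers:
  - name: terminate-cmd-rpof
    image: busybox                # illustrative
    command: ["sh", "-c", "exit 1"]   # non-zero exit -> restarted under OnFailure
```

For each variant the test checks the resulting `RestartCount`, `Phase`, `Ready` condition, and `State` against the expectations for that policy.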
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:10:37.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-979m
STEP: Creating a pod to test atomic-volume-subpath
Jan  4 16:10:37.296: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-979m" in namespace "subpath-6940" to be "success or failure"
Jan  4 16:10:37.341: INFO: Pod "pod-subpath-test-projected-979m": Phase="Pending", Reason="", readiness=false. Elapsed: 45.565598ms
Jan  4 16:10:39.356: INFO: Pod "pod-subpath-test-projected-979m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060308656s
Jan  4 16:10:41.364: INFO: Pod "pod-subpath-test-projected-979m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068607703s
Jan  4 16:10:43.372: INFO: Pod "pod-subpath-test-projected-979m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076504378s
Jan  4 16:10:45.378: INFO: Pod "pod-subpath-test-projected-979m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082463953s
Jan  4 16:10:47.385: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 10.088909066s
Jan  4 16:10:49.392: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 12.096464117s
Jan  4 16:10:51.399: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 14.102753948s
Jan  4 16:10:53.405: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 16.109694386s
Jan  4 16:10:55.412: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 18.116500325s
Jan  4 16:10:57.419: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 20.123282815s
Jan  4 16:10:59.427: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 22.131153261s
Jan  4 16:11:01.438: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 24.142263313s
Jan  4 16:11:03.456: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 26.159779031s
Jan  4 16:11:05.464: INFO: Pod "pod-subpath-test-projected-979m": Phase="Running", Reason="", readiness=true. Elapsed: 28.168573109s
Jan  4 16:11:07.471: INFO: Pod "pod-subpath-test-projected-979m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.175302784s
STEP: Saw pod success
Jan  4 16:11:07.471: INFO: Pod "pod-subpath-test-projected-979m" satisfied condition "success or failure"
Jan  4 16:11:07.476: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-979m container test-container-subpath-projected-979m: 
STEP: delete the pod
Jan  4 16:11:07.524: INFO: Waiting for pod pod-subpath-test-projected-979m to disappear
Jan  4 16:11:07.533: INFO: Pod pod-subpath-test-projected-979m no longer exists
STEP: Deleting pod pod-subpath-test-projected-979m
Jan  4 16:11:07.533: INFO: Deleting pod "pod-subpath-test-projected-979m" in namespace "subpath-6940"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:11:07.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6940" for this suite.
Jan  4 16:11:13.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:11:13.703: INFO: namespace subpath-6940 deletion completed in 6.160199649s

• [SLOW TEST:36.484 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
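The subpath test above mounts only a sub-path of a projected volume into the container via `volumeMounts[].subPath`. A minimal sketch (names, image, and paths are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test       # illustrative
spec:
  containers:
  - name: test-container-subpath
    image: busybox             # illustrative
    command: ["sh", "-c", "cat /test-volume/data"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
      subPath: path/in/volume  # mount only this sub-path of the volume
  volumes:
  - name: vol
    projected:
      sources:
      - configMap:
          name: my-configmap   # illustrative
```

"Atomic writer" refers to how the kubelet updates these volume types via symlink swaps, which is what makes subPath semantics worth testing.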
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:11:13.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  4 16:11:13.775: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  4 16:11:13.823: INFO: Waiting for terminating namespaces to be deleted...
Jan  4 16:11:13.825: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  4 16:11:13.855: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan  4 16:11:13.855: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 16:11:13.855: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  4 16:11:13.855: INFO: 	Container weave ready: true, restart count 0
Jan  4 16:11:13.855: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 16:11:13.855: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  4 16:11:13.886: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan  4 16:11:13.886: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  4 16:11:13.887: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  4 16:11:13.887: INFO: 	Container coredns ready: true, restart count 0
Jan  4 16:11:13.887: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan  4 16:11:13.887: INFO: 	Container etcd ready: true, restart count 0
Jan  4 16:11:13.887: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  4 16:11:13.887: INFO: 	Container weave ready: true, restart count 0
Jan  4 16:11:13.887: INFO: 	Container weave-npc ready: true, restart count 0
Jan  4 16:11:13.887: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  4 16:11:13.887: INFO: 	Container coredns ready: true, restart count 0
Jan  4 16:11:13.887: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan  4 16:11:13.887: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  4 16:11:13.887: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan  4 16:11:13.887: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  4 16:11:13.887: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan  4 16:11:13.887: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e6baf362557df5], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:11:14.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2342" for this suite.
Jan  4 16:11:21.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:11:21.399: INFO: namespace sched-pred-2342 deletion completed in 6.460999718s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.695 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
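The scheduler-predicates test above creates a pod with a `nodeSelector` that no node satisfies and then looks for the FailedScheduling event quoted in the log. A minimal sketch (label key/value and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    no-such-label: "true"         # no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # illustrative
```

With both nodes unlabeled, the scheduler emits "0/2 nodes are available: 2 node(s) didn't match node selector." and the pod stays Pending.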
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:11:21.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-4253dd81-bc08-49a3-9cf5-ea073e049cd0
STEP: Creating a pod to test consume configMaps
Jan  4 16:11:21.551: INFO: Waiting up to 5m0s for pod "pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab" in namespace "configmap-4813" to be "success or failure"
Jan  4 16:11:21.563: INFO: Pod "pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 11.854623ms
Jan  4 16:11:23.572: INFO: Pod "pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02077674s
Jan  4 16:11:25.582: INFO: Pod "pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030847274s
Jan  4 16:11:27.593: INFO: Pod "pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041783489s
Jan  4 16:11:29.602: INFO: Pod "pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050887236s
STEP: Saw pod success
Jan  4 16:11:29.602: INFO: Pod "pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab" satisfied condition "success or failure"
Jan  4 16:11:29.606: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab container configmap-volume-test: 
STEP: delete the pod
Jan  4 16:11:29.669: INFO: Waiting for pod pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab to disappear
Jan  4 16:11:29.681: INFO: Pod pod-configmaps-0db13eec-40d0-435f-9f49-b8177564f3ab no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:11:29.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4813" for this suite.
Jan  4 16:11:35.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:11:35.890: INFO: namespace configmap-4813 deletion completed in 6.201439531s

• [SLOW TEST:14.492 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
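The ConfigMap test above consumes a key through an `items` mapping (renaming it inside the volume) while running as a non-root user. A hedged sketch (names, image, uid, and key are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot   # illustrative
spec:
  securityContext:
    runAsUser: 1000              # run as non-root
  containers:
  - name: configmap-volume-test
    image: busybox               # illustrative
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # illustrative
      items:
      - key: data-2
        path: path/to/data-2     # the "mapping": key exposed at a different path
```

The test passes if the non-root container can read the mapped file and its contents match the ConfigMap key.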
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:11:35.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 16:11:36.030: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1" in namespace "projected-9155" to be "success or failure"
Jan  4 16:11:36.035: INFO: Pod "downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.025729ms
Jan  4 16:11:38.042: INFO: Pod "downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011267245s
Jan  4 16:11:40.050: INFO: Pod "downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019699665s
Jan  4 16:11:42.059: INFO: Pod "downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028973091s
Jan  4 16:11:44.068: INFO: Pod "downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037376074s
Jan  4 16:11:46.076: INFO: Pod "downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045476263s
STEP: Saw pod success
Jan  4 16:11:46.076: INFO: Pod "downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1" satisfied condition "success or failure"
Jan  4 16:11:46.082: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1 container client-container: 
STEP: delete the pod
Jan  4 16:11:46.243: INFO: Waiting for pod downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1 to disappear
Jan  4 16:11:46.251: INFO: Pod downwardapi-volume-03e939ef-f0ef-4f0f-bce3-7a1fbdda25a1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:11:46.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9155" for this suite.
Jan  4 16:11:52.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:11:52.392: INFO: namespace projected-9155 deletion completed in 6.135523569s

• [SLOW TEST:16.501 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:11:52.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan  4 16:11:52.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3356'
Jan  4 16:11:54.862: INFO: stderr: ""
Jan  4 16:11:54.862: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 16:11:54.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3356'
Jan  4 16:11:55.198: INFO: stderr: ""
Jan  4 16:11:55.198: INFO: stdout: "update-demo-nautilus-mqqd9 update-demo-nautilus-p4j9l "
Jan  4 16:11:55.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqqd9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:11:55.403: INFO: stderr: ""
Jan  4 16:11:55.403: INFO: stdout: ""
Jan  4 16:11:55.403: INFO: update-demo-nautilus-mqqd9 is created but not running
Jan  4 16:12:00.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3356'
Jan  4 16:12:01.618: INFO: stderr: ""
Jan  4 16:12:01.618: INFO: stdout: "update-demo-nautilus-mqqd9 update-demo-nautilus-p4j9l "
Jan  4 16:12:01.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqqd9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:12:01.975: INFO: stderr: ""
Jan  4 16:12:01.975: INFO: stdout: ""
Jan  4 16:12:01.975: INFO: update-demo-nautilus-mqqd9 is created but not running
Jan  4 16:12:06.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3356'
Jan  4 16:12:07.215: INFO: stderr: ""
Jan  4 16:12:07.215: INFO: stdout: "update-demo-nautilus-mqqd9 update-demo-nautilus-p4j9l "
Jan  4 16:12:07.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqqd9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:12:07.367: INFO: stderr: ""
Jan  4 16:12:07.368: INFO: stdout: "true"
Jan  4 16:12:07.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqqd9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:12:07.559: INFO: stderr: ""
Jan  4 16:12:07.559: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 16:12:07.559: INFO: validating pod update-demo-nautilus-mqqd9
Jan  4 16:12:07.587: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 16:12:07.587: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 16:12:07.587: INFO: update-demo-nautilus-mqqd9 is verified up and running
Jan  4 16:12:07.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p4j9l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:12:07.652: INFO: stderr: ""
Jan  4 16:12:07.652: INFO: stdout: "true"
Jan  4 16:12:07.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p4j9l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:12:07.725: INFO: stderr: ""
Jan  4 16:12:07.725: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  4 16:12:07.725: INFO: validating pod update-demo-nautilus-p4j9l
Jan  4 16:12:07.734: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  4 16:12:07.734: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  4 16:12:07.734: INFO: update-demo-nautilus-p4j9l is verified up and running
STEP: rolling-update to new replication controller
Jan  4 16:12:07.737: INFO: scanned /root for discovery docs: 
Jan  4 16:12:07.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3356'
Jan  4 16:12:40.760: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  4 16:12:40.760: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  4 16:12:40.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3356'
Jan  4 16:12:40.894: INFO: stderr: ""
Jan  4 16:12:40.894: INFO: stdout: "update-demo-kitten-dsrzs update-demo-kitten-jlncc "
Jan  4 16:12:40.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dsrzs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:12:41.005: INFO: stderr: ""
Jan  4 16:12:41.005: INFO: stdout: "true"
Jan  4 16:12:41.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dsrzs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:12:41.090: INFO: stderr: ""
Jan  4 16:12:41.090: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  4 16:12:41.090: INFO: validating pod update-demo-kitten-dsrzs
Jan  4 16:12:41.105: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  4 16:12:41.106: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  4 16:12:41.106: INFO: update-demo-kitten-dsrzs is verified up and running
Jan  4 16:12:41.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jlncc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:12:41.188: INFO: stderr: ""
Jan  4 16:12:41.188: INFO: stdout: "true"
Jan  4 16:12:41.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jlncc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3356'
Jan  4 16:12:41.253: INFO: stderr: ""
Jan  4 16:12:41.253: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  4 16:12:41.253: INFO: validating pod update-demo-kitten-jlncc
Jan  4 16:12:41.270: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  4 16:12:41.271: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  4 16:12:41.271: INFO: update-demo-kitten-jlncc is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:12:41.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3356" for this suite.
Jan  4 16:13:07.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:13:07.393: INFO: namespace kubectl-3356 deletion completed in 26.117900262s

• [SLOW TEST:75.000 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:13:07.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  4 16:13:07.463: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-a,UID:085c907f-f86e-48f0-ad61-7583b35014b5,ResourceVersion:19294185,Generation:0,CreationTimestamp:2020-01-04 16:13:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 16:13:07.463: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-a,UID:085c907f-f86e-48f0-ad61-7583b35014b5,ResourceVersion:19294185,Generation:0,CreationTimestamp:2020-01-04 16:13:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  4 16:13:17.476: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-a,UID:085c907f-f86e-48f0-ad61-7583b35014b5,ResourceVersion:19294199,Generation:0,CreationTimestamp:2020-01-04 16:13:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  4 16:13:17.476: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-a,UID:085c907f-f86e-48f0-ad61-7583b35014b5,ResourceVersion:19294199,Generation:0,CreationTimestamp:2020-01-04 16:13:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  4 16:13:27.490: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-a,UID:085c907f-f86e-48f0-ad61-7583b35014b5,ResourceVersion:19294212,Generation:0,CreationTimestamp:2020-01-04 16:13:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 16:13:27.491: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-a,UID:085c907f-f86e-48f0-ad61-7583b35014b5,ResourceVersion:19294212,Generation:0,CreationTimestamp:2020-01-04 16:13:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  4 16:13:37.502: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-a,UID:085c907f-f86e-48f0-ad61-7583b35014b5,ResourceVersion:19294227,Generation:0,CreationTimestamp:2020-01-04 16:13:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 16:13:37.503: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-a,UID:085c907f-f86e-48f0-ad61-7583b35014b5,ResourceVersion:19294227,Generation:0,CreationTimestamp:2020-01-04 16:13:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  4 16:13:47.556: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-b,UID:f6e1d514-eae0-4f98-ac73-4923ee0bdcae,ResourceVersion:19294241,Generation:0,CreationTimestamp:2020-01-04 16:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 16:13:47.556: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-b,UID:f6e1d514-eae0-4f98-ac73-4923ee0bdcae,ResourceVersion:19294241,Generation:0,CreationTimestamp:2020-01-04 16:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  4 16:13:57.580: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-b,UID:f6e1d514-eae0-4f98-ac73-4923ee0bdcae,ResourceVersion:19294255,Generation:0,CreationTimestamp:2020-01-04 16:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  4 16:13:57.580: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3918,SelfLink:/api/v1/namespaces/watch-3918/configmaps/e2e-watch-test-configmap-b,UID:f6e1d514-eae0-4f98-ac73-4923ee0bdcae,ResourceVersion:19294255,Generation:0,CreationTimestamp:2020-01-04 16:13:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:14:07.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3918" for this suite.
Jan  4 16:14:13.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:14:13.728: INFO: namespace watch-3918 deletion completed in 6.131759923s

• [SLOW TEST:66.335 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:14:13.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:14:24.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6332" for this suite.
Jan  4 16:14:30.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:14:30.368: INFO: namespace emptydir-wrapper-6332 deletion completed in 6.196306701s

• [SLOW TEST:16.640 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:14:30.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  4 16:14:30.517: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4217,SelfLink:/api/v1/namespaces/watch-4217/configmaps/e2e-watch-test-resource-version,UID:c3ec7d85-2e77-496b-aec6-ddc390a5d1a6,ResourceVersion:19294341,Generation:0,CreationTimestamp:2020-01-04 16:14:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  4 16:14:30.518: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4217,SelfLink:/api/v1/namespaces/watch-4217/configmaps/e2e-watch-test-resource-version,UID:c3ec7d85-2e77-496b-aec6-ddc390a5d1a6,ResourceVersion:19294342,Generation:0,CreationTimestamp:2020-01-04 16:14:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:14:30.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4217" for this suite.
Jan  4 16:14:36.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:14:36.683: INFO: namespace watch-4217 deletion completed in 6.159466722s

• [SLOW TEST:6.313 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:14:36.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1606
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  4 16:14:36.749: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  4 16:15:17.233: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1606 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 16:15:17.234: INFO: >>> kubeConfig: /root/.kube/config
I0104 16:15:17.286776       8 log.go:172] (0xc000e20370) (0xc002f074a0) Create stream
I0104 16:15:17.286859       8 log.go:172] (0xc000e20370) (0xc002f074a0) Stream added, broadcasting: 1
I0104 16:15:17.293053       8 log.go:172] (0xc000e20370) Reply frame received for 1
I0104 16:15:17.293080       8 log.go:172] (0xc000e20370) (0xc002ae2e60) Create stream
I0104 16:15:17.293087       8 log.go:172] (0xc000e20370) (0xc002ae2e60) Stream added, broadcasting: 3
I0104 16:15:17.294468       8 log.go:172] (0xc000e20370) Reply frame received for 3
I0104 16:15:17.294485       8 log.go:172] (0xc000e20370) (0xc002f07540) Create stream
I0104 16:15:17.294490       8 log.go:172] (0xc000e20370) (0xc002f07540) Stream added, broadcasting: 5
I0104 16:15:17.295958       8 log.go:172] (0xc000e20370) Reply frame received for 5
I0104 16:15:18.407642       8 log.go:172] (0xc000e20370) Data frame received for 3
I0104 16:15:18.407756       8 log.go:172] (0xc002ae2e60) (3) Data frame handling
I0104 16:15:18.407780       8 log.go:172] (0xc002ae2e60) (3) Data frame sent
I0104 16:15:18.700346       8 log.go:172] (0xc000e20370) (0xc002ae2e60) Stream removed, broadcasting: 3
I0104 16:15:18.700743       8 log.go:172] (0xc000e20370) Data frame received for 1
I0104 16:15:18.700931       8 log.go:172] (0xc000e20370) (0xc002f07540) Stream removed, broadcasting: 5
I0104 16:15:18.701010       8 log.go:172] (0xc002f074a0) (1) Data frame handling
I0104 16:15:18.701023       8 log.go:172] (0xc002f074a0) (1) Data frame sent
I0104 16:15:18.701036       8 log.go:172] (0xc000e20370) (0xc002f074a0) Stream removed, broadcasting: 1
I0104 16:15:18.701053       8 log.go:172] (0xc000e20370) Go away received
I0104 16:15:18.702062       8 log.go:172] (0xc000e20370) (0xc002f074a0) Stream removed, broadcasting: 1
I0104 16:15:18.702132       8 log.go:172] (0xc000e20370) (0xc002ae2e60) Stream removed, broadcasting: 3
I0104 16:15:18.702164       8 log.go:172] (0xc000e20370) (0xc002f07540) Stream removed, broadcasting: 5
Jan  4 16:15:18.702: INFO: Found all expected endpoints: [netserver-0]
Jan  4 16:15:18.720: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1606 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  4 16:15:18.721: INFO: >>> kubeConfig: /root/.kube/config
I0104 16:15:18.870336       8 log.go:172] (0xc0008c46e0) (0xc002e1eaa0) Create stream
I0104 16:15:18.870791       8 log.go:172] (0xc0008c46e0) (0xc002e1eaa0) Stream added, broadcasting: 1
I0104 16:15:18.888007       8 log.go:172] (0xc0008c46e0) Reply frame received for 1
I0104 16:15:18.888258       8 log.go:172] (0xc0008c46e0) (0xc002ae2f00) Create stream
I0104 16:15:18.888276       8 log.go:172] (0xc0008c46e0) (0xc002ae2f00) Stream added, broadcasting: 3
I0104 16:15:18.892842       8 log.go:172] (0xc0008c46e0) Reply frame received for 3
I0104 16:15:18.892969       8 log.go:172] (0xc0008c46e0) (0xc000786c80) Create stream
I0104 16:15:18.892983       8 log.go:172] (0xc0008c46e0) (0xc000786c80) Stream added, broadcasting: 5
I0104 16:15:18.896834       8 log.go:172] (0xc0008c46e0) Reply frame received for 5
I0104 16:15:20.117286       8 log.go:172] (0xc0008c46e0) Data frame received for 3
I0104 16:15:20.117337       8 log.go:172] (0xc002ae2f00) (3) Data frame handling
I0104 16:15:20.117361       8 log.go:172] (0xc002ae2f00) (3) Data frame sent
I0104 16:15:20.232145       8 log.go:172] (0xc0008c46e0) Data frame received for 1
I0104 16:15:20.232313       8 log.go:172] (0xc0008c46e0) (0xc000786c80) Stream removed, broadcasting: 5
I0104 16:15:20.232370       8 log.go:172] (0xc002e1eaa0) (1) Data frame handling
I0104 16:15:20.232402       8 log.go:172] (0xc002e1eaa0) (1) Data frame sent
I0104 16:15:20.232459       8 log.go:172] (0xc0008c46e0) (0xc002ae2f00) Stream removed, broadcasting: 3
I0104 16:15:20.232632       8 log.go:172] (0xc0008c46e0) (0xc002e1eaa0) Stream removed, broadcasting: 1
I0104 16:15:20.232664       8 log.go:172] (0xc0008c46e0) Go away received
I0104 16:15:20.232986       8 log.go:172] (0xc0008c46e0) (0xc002e1eaa0) Stream removed, broadcasting: 1
I0104 16:15:20.233006       8 log.go:172] (0xc0008c46e0) (0xc002ae2f00) Stream removed, broadcasting: 3
I0104 16:15:20.233032       8 log.go:172] (0xc0008c46e0) (0xc000786c80) Stream removed, broadcasting: 5
Jan  4 16:15:20.233: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:15:20.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1606" for this suite.
Jan  4 16:15:44.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:15:44.482: INFO: namespace pod-network-test-1606 deletion completed in 24.215710888s

• [SLOW TEST:67.799 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
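The UDP check in the test above execs `echo hostName | nc -w 1 -u 10.32.0.4 8081` from a host-network test pod against each netserver pod. A minimal sketch of such a UDP echo target, assuming the standard e2e `agnhost netexec` image (the image tag and args are assumptions, not taken from this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                    # hypothetical name echoing the log
  labels:
    selector: udp-check                # illustrative label
spec:
  containers:
  - name: webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed test image
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8081
      protocol: UDP
```

The `grep -v '^\s*$'` in the logged command simply discards blank lines from the UDP reply.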
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:15:44.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  4 16:15:53.882: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:15:54.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9487" for this suite.
Jan  4 16:16:00.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:16:00.191: INFO: namespace container-runtime-9487 deletion completed in 6.126806477s

• [SLOW TEST:15.708 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
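The termination-message test above expects the literal string `OK` read back from the termination message file. A minimal sketch of a pod exercising the same fields, assuming a `busybox` image (names are illustrative, not from this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                     # assumed image
    # write the message, then exit 0 so the pod succeeds
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```

With `FallbackToLogsOnError`, the kubelet falls back to the tail of the container log only when the container fails *and* the message file is empty; here the file is populated, so `OK` is reported either way.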
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:16:00.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  4 16:16:00.310: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:16:18.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9435" for this suite.
Jan  4 16:16:40.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:16:40.300: INFO: namespace init-container-9435 deletion completed in 22.182714599s

• [SLOW TEST:40.108 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
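The init-container test above verifies that init containers run to completion, in order, before the app container starts on a `restartPolicy: Always` pod. A minimal sketch under assumed images and names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo            # illustrative name
spec:
  restartPolicy: Always      # the "RestartAlways" case in the test name
  initContainers:            # each must exit 0 before the next starts
  - name: init-1
    image: busybox           # assumed image
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```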
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:16:40.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-1d1c2e94-b8d8-44e1-8328-d35a8ae09a21 in namespace container-probe-2429
Jan  4 16:16:48.466: INFO: Started pod busybox-1d1c2e94-b8d8-44e1-8328-d35a8ae09a21 in namespace container-probe-2429
STEP: checking the pod's current state and verifying that restartCount is present
Jan  4 16:16:48.470: INFO: Initial restart count of pod busybox-1d1c2e94-b8d8-44e1-8328-d35a8ae09a21 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:20:49.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2429" for this suite.
Jan  4 16:20:55.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:20:56.083: INFO: namespace container-probe-2429 deletion completed in 6.154897896s

• [SLOW TEST:255.783 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
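The probe test above runs for roughly four minutes confirming that `restartCount` stays at 0. A sketch of the kind of pod it creates, with an exec liveness probe that keeps succeeding because `/tmp/health` is never removed (image and timings are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness     # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox           # assumed image
    # create /tmp/health and leave it in place so the probe always passes
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```

Deleting `/tmp/health` instead would make `cat` exit non-zero, the probe fail, and the kubelet restart the container — the inverse case this test rules out.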
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:20:56.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-db6ba5de-66aa-4038-b81e-fb05530c1136
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:20:56.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7353" for this suite.
Jan  4 16:21:02.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:21:02.383: INFO: namespace configmap-7353 deletion completed in 6.221085525s

• [SLOW TEST:6.299 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
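The negative test above expects the API server to reject the object. A sketch of the invalid ConfigMap in question — keys must be non-empty and consist of alphanumerics, `-`, `_`, or `.`, so this fails validation at create time:

```yaml
# Intentionally invalid: the API server rejects empty ConfigMap keys,
# so creating this object returns a validation error.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey      # illustrative name
data:
  "": "value"                        # empty key -> rejected
```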
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:21:02.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  4 16:21:02.454: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0" in namespace "downward-api-6263" to be "success or failure"
Jan  4 16:21:02.541: INFO: Pod "downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0": Phase="Pending", Reason="", readiness=false. Elapsed: 86.899326ms
Jan  4 16:21:04.558: INFO: Pod "downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104298919s
Jan  4 16:21:06.566: INFO: Pod "downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112380714s
Jan  4 16:21:08.580: INFO: Pod "downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12625369s
Jan  4 16:21:10.596: INFO: Pod "downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0": Phase="Running", Reason="", readiness=true. Elapsed: 8.142333669s
Jan  4 16:21:12.603: INFO: Pod "downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149226568s
STEP: Saw pod success
Jan  4 16:21:12.603: INFO: Pod "downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0" satisfied condition "success or failure"
Jan  4 16:21:12.606: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0 container client-container: 
STEP: delete the pod
Jan  4 16:21:12.675: INFO: Waiting for pod downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0 to disappear
Jan  4 16:21:12.691: INFO: Pod downwardapi-volume-31f352fe-37a0-4336-a69b-0a1ee8ffcaa0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:21:12.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6263" for this suite.
Jan  4 16:21:18.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:21:18.853: INFO: namespace downward-api-6263 deletion completed in 6.15247195s

• [SLOW TEST:16.470 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
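The "podname only" test above mounts a downward API volume exposing `metadata.name` as a single file. A minimal sketch, assuming a `busybox` image (the mount path and file name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The container prints its own pod name and exits, which is why the test waits for the `Succeeded` phase and then reads the logs.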
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:21:18.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 16:21:19.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-281'
Jan  4 16:21:19.157: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 16:21:19.157: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Jan  4 16:21:19.188: INFO: scanned /root for discovery docs: 
Jan  4 16:21:19.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-281'
Jan  4 16:21:42.580: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  4 16:21:42.580: INFO: stdout: "Created e2e-test-nginx-rc-b4cf66a0a74ab6bd3d09e494426a5b3f\nScaling up e2e-test-nginx-rc-b4cf66a0a74ab6bd3d09e494426a5b3f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b4cf66a0a74ab6bd3d09e494426a5b3f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b4cf66a0a74ab6bd3d09e494426a5b3f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  4 16:21:42.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-281'
Jan  4 16:21:42.685: INFO: stderr: ""
Jan  4 16:21:42.686: INFO: stdout: "e2e-test-nginx-rc-b4cf66a0a74ab6bd3d09e494426a5b3f-vm54n e2e-test-nginx-rc-t7f2b "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  4 16:21:47.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-281'
Jan  4 16:21:47.839: INFO: stderr: ""
Jan  4 16:21:47.839: INFO: stdout: "e2e-test-nginx-rc-b4cf66a0a74ab6bd3d09e494426a5b3f-vm54n "
Jan  4 16:21:47.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b4cf66a0a74ab6bd3d09e494426a5b3f-vm54n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-281'
Jan  4 16:21:48.000: INFO: stderr: ""
Jan  4 16:21:48.000: INFO: stdout: "true"
Jan  4 16:21:48.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b4cf66a0a74ab6bd3d09e494426a5b3f-vm54n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-281'
Jan  4 16:21:48.089: INFO: stderr: ""
Jan  4 16:21:48.089: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  4 16:21:48.089: INFO: e2e-test-nginx-rc-b4cf66a0a74ab6bd3d09e494426a5b3f-vm54n is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan  4 16:21:48.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-281'
Jan  4 16:21:48.235: INFO: stderr: ""
Jan  4 16:21:48.235: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:21:48.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-281" for this suite.
Jan  4 16:22:10.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:22:10.461: INFO: namespace kubectl-281 deletion completed in 22.216940736s

• [SLOW TEST:51.607 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
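The deprecated `kubectl run --generator=run/v1` invocation logged above creates a bare ReplicationController, which `kubectl rolling-update` then replaces pod-by-pod. The object it generates is roughly equivalent to this manifest (reconstructed from the name, image, and `run=` label in the log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

As the stderr in the log notes, both `--generator=run/v1` and `rolling-update` are deprecated; Deployments with `kubectl rollout` are the supported replacement.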
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:22:10.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-ad43659b-444e-4fa7-ac05-f99cedd373e0
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-ad43659b-444e-4fa7-ac05-f99cedd373e0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:23:26.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9849" for this suite.
Jan  4 16:23:48.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:23:48.395: INFO: namespace configmap-9849 deletion completed in 22.193380825s

• [SLOW TEST:97.933 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
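The update test above relies on the kubelet propagating ConfigMap changes into an already-mounted volume, which happens asynchronously on the kubelet's sync period (hence the long "waiting to observe update in volume" phase). A sketch of a consuming pod, with an assumed image and paths:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo        # illustrative name
spec:
  containers:
  - name: main
    image: busybox                   # assumed image
    # poll the mounted file so updates become visible in the logs
    command: ["/bin/sh", "-c", "while true; do cat /etc/config/data; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: configmap-test-upd-ad43659b-444e-4fa7-ac05-f99cedd373e0
```

Note this propagation applies only to volume mounts; ConfigMap values injected as environment variables are fixed at container start.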
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:23:48.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:23:56.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9857" for this suite.
Jan  4 16:24:38.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:24:38.813: INFO: namespace kubelet-test-9857 deletion completed in 42.175034858s

• [SLOW TEST:50.418 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
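The kubelet test above checks that a container's stdout is captured and retrievable via the logs API. A minimal sketch of such a pod, assuming a `busybox` image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox           # assumed image
    command: ["/bin/sh", "-c", "echo 'hello from busybox'"]
```

After the pod runs, `kubectl logs busybox-logs-demo` would return the echoed line.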
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:24:38.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 16:24:39.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3036'
Jan  4 16:24:40.992: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  4 16:24:40.993: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan  4 16:24:41.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3036'
Jan  4 16:24:41.133: INFO: stderr: ""
Jan  4 16:24:41.134: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:24:41.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3036" for this suite.
Jan  4 16:24:47.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:24:47.402: INFO: namespace kubectl-3036 deletion completed in 6.26274661s

• [SLOW TEST:8.588 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
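The deprecated `kubectl run --generator=job/v1` call logged above produces a batch Job. Its non-deprecated equivalent is an explicit manifest like the following (reconstructed from the name, image, and restart policy in the log):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure     # Jobs allow only OnFailure or Never
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```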
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:24:47.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4795
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  4 16:24:47.537: INFO: Found 0 stateful pods, waiting for 3
Jan  4 16:24:57.550: INFO: Found 2 stateful pods, waiting for 3
Jan  4 16:25:07.562: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 16:25:07.562: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 16:25:07.562: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 16:25:17.550: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 16:25:17.550: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 16:25:17.550: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  4 16:25:17.608: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  4 16:25:27.667: INFO: Updating stateful set ss2
Jan  4 16:25:27.762: INFO: Waiting for Pod statefulset-4795/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  4 16:25:38.133: INFO: Found 2 stateful pods, waiting for 3
Jan  4 16:25:48.144: INFO: Found 2 stateful pods, waiting for 3
Jan  4 16:25:58.145: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 16:25:58.146: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 16:25:58.146: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  4 16:26:08.144: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 16:26:08.145: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  4 16:26:08.145: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  4 16:26:08.179: INFO: Updating stateful set ss2
Jan  4 16:26:08.219: INFO: Waiting for Pod statefulset-4795/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 16:26:18.259: INFO: Updating stateful set ss2
Jan  4 16:26:18.274: INFO: Waiting for StatefulSet statefulset-4795/ss2 to complete update
Jan  4 16:26:18.274: INFO: Waiting for Pod statefulset-4795/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 16:26:28.294: INFO: Waiting for StatefulSet statefulset-4795/ss2 to complete update
Jan  4 16:26:28.294: INFO: Waiting for Pod statefulset-4795/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  4 16:26:38.287: INFO: Waiting for StatefulSet statefulset-4795/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  4 16:26:48.293: INFO: Deleting all statefulset in ns statefulset-4795
Jan  4 16:26:48.299: INFO: Scaling statefulset ss2 to 0
Jan  4 16:27:18.336: INFO: Waiting for statefulset status.replicas updated to 0
Jan  4 16:27:18.341: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:27:18.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4795" for this suite.
Jan  4 16:27:26.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:27:26.607: INFO: namespace statefulset-4795 deletion completed in 8.211722926s

• [SLOW TEST:159.205 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
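The phased rolling update in the spec above drives `spec.updateStrategy.rollingUpdate.partition`: only pods whose ordinal is at or above the partition are moved to the new revision, which is why the log shows ss2-1 and then ss2-0 waiting on the new revision in separate phases. A minimal illustrative sketch of that selection rule (the partition values are assumed for illustration; the log does not print them):

```python
def pods_to_update(replicas: int, partition: int) -> list:
    """Ordinals a partitioned RollingUpdate moves to the new revision:
    only pods with ordinal >= partition are updated."""
    return [i for i in range(replicas) if i >= partition]

# ss2 has 3 replicas; a canary phase (high partition) touches only the
# highest ordinal, then lowering the partition rolls the rest.
print(pods_to_update(3, 2))  # -> [2]
print(pods_to_update(3, 0))  # -> [0, 1, 2]
```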
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:27:26.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:27:32.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9895" for this suite.
Jan  4 16:27:38.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:27:38.506: INFO: namespace watch-9895 deletion completed in 6.26922151s

• [SLOW TEST:11.898 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
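The Watchers spec above checks that every concurrent watch observes the same event order. The property it relies on is that the apiserver delivers a single resourceVersion-ordered stream to each watcher, so all observers agree on ordering. A toy sketch of that invariant (queues stand in for watch connections; this is not the apiserver's code):

```python
import queue

def broadcast(events, watcher_queues):
    """Deliver one totally ordered event stream to every watcher,
    as the apiserver fans out resourceVersion-ordered events."""
    for ev in events:
        for q in watcher_queues:
            q.put(ev)

# Three "concurrent watches" drain their own queues; because the source
# stream is totally ordered, every observer sees the identical sequence.
events = [("ADDED", rv) for rv in range(1, 6)]
queues = [queue.Queue() for _ in range(3)]
broadcast(events, queues)
observed = [[q.get() for _ in range(len(events))] for q in queues]
assert all(o == events for o in observed)
```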
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:27:38.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan  4 16:27:38.601: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:27:38.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6320" for this suite.
Jan  4 16:27:44.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:27:44.844: INFO: namespace kubectl-6320 deletion completed in 6.152478289s

• [SLOW TEST:6.337 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
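`kubectl proxy -p 0` uses the standard sockets convention: binding port 0 asks the OS to assign any free ephemeral port, and the proxy then reports the port it actually got. A small sketch of that convention in isolation:

```python
import socket

# Bind to port 0 and let the OS pick a free ephemeral port -- the same
# mechanism `kubectl proxy -p 0` relies on before printing its port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
chosen = s.getsockname()[1]
s.close()
print(chosen)  # some non-zero OS-assigned port
```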
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  4 16:27:44.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  4 16:27:44.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1673'
Jan  4 16:27:45.206: INFO: stderr: ""
Jan  4 16:27:45.206: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan  4 16:27:45.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1673'
Jan  4 16:27:48.882: INFO: stderr: ""
Jan  4 16:27:48.882: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  4 16:27:48.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1673" for this suite.
Jan  4 16:27:54.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  4 16:27:55.024: INFO: namespace kubectl-1673 deletion completed in 6.130755732s

• [SLOW TEST:10.180 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
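With `--restart=Never` and `--generator=run-pod/v1`, `kubectl run` creates a bare Pod rather than a Deployment or Job, which is why the spec only has to verify and delete a single pod. A rough sketch of the manifest shape that generator produced (illustrative only; field selection here is an assumption, not the generator's exact output):

```python
def run_pod_manifest(name: str, image: str) -> dict:
    """Approximate shape of the run-pod/v1 generator's output for
    `kubectl run --restart=Never`: a bare Pod object."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{"name": name, "image": image}],
        },
    }

m = run_pod_manifest("e2e-test-nginx-pod", "docker.io/library/nginx:1.14-alpine")
assert m["kind"] == "Pod" and m["spec"]["restartPolicy"] == "Never"
```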
SSSSSSSSSSSSSSSSSSS
Jan  4 16:27:55.025: INFO: Running AfterSuite actions on all nodes
Jan  4 16:27:55.025: INFO: Running AfterSuite actions on node 1
Jan  4 16:27:55.025: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 9066.043 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS