I0101 12:57:03.261272 8 e2e.go:243] Starting e2e run "87e7bcb4-9d7a-4846-b361-e187da49d16d" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577883421 - Will randomize all specs
Will run 215 of 4412 specs

Jan 1 12:57:03.805: INFO: >>> kubeConfig: /root/.kube/config
Jan 1 12:57:03.814: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 1 12:57:03.880: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 1 12:57:03.978: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 1 12:57:03.978: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 1 12:57:03.978: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 1 12:57:03.997: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 1 12:57:03.998: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 1 12:57:03.998: INFO: e2e test version: v1.15.7
Jan 1 12:57:04.000: INFO: kube-apiserver version: v1.15.1
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 1 12:57:04.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Jan 1 12:57:04.173: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9071/secret-test-e73ca260-2704-4b27-950b-4b14bc71175d
STEP: Creating a pod to test consume secrets
Jan 1 12:57:04.189: INFO: Waiting up to 5m0s for pod "pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b" in namespace "secrets-9071" to be "success or failure"
Jan 1 12:57:04.197: INFO: Pod "pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.991487ms
Jan 1 12:57:06.208: INFO: Pod "pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018814513s
Jan 1 12:57:08.216: INFO: Pod "pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026670789s
Jan 1 12:57:10.223: INFO: Pod "pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034394986s
Jan 1 12:57:12.235: INFO: Pod "pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046282652s
Jan 1 12:57:14.244: INFO: Pod "pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054848137s
STEP: Saw pod success
Jan 1 12:57:14.244: INFO: Pod "pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b" satisfied condition "success or failure"
Jan 1 12:57:14.247: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b container env-test:
STEP: delete the pod
Jan 1 12:57:14.328: INFO: Waiting for pod pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b to disappear
Jan 1 12:57:14.336: INFO: Pod pod-configmaps-bcc18417-d859-43a4-81a8-7672dccafd3b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 1 12:57:14.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9071" for this suite.
Jan 1 12:57:20.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 1 12:57:20.638: INFO: namespace secrets-9071 deletion completed in 6.292438525s

• [SLOW TEST:16.638 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
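What the spec above exercises: the framework creates a Secret, then a pod whose container ("env-test") surfaces one Secret key as an environment variable, and waits up to 5m0s for the pod to reach "success or failure" (Succeeded here, since the container prints its environment and exits). A minimal client-go sketch of the same flow follows; it is an illustration, not the conformance test itself: the namespace, Secret, and pod names are made up, busybox stands in for the test image, and the current context-taking Create signatures are assumed (the v1.15-era framework used older ones).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()

	// Same kubeconfig the suite logs with ">>> kubeConfig: /root/.kube/config".
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Hypothetical namespace; the framework generates ones like "secrets-9071".
	const ns = "secrets-demo"
	if _, err := client.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The Secret whose value the container will see in its environment.
	if _, err := client.CoreV1().Secrets(ns).Create(ctx, &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod that prints its environment once and exits, so its phase ends up
	// Succeeded -- the "success" half of the "success or failure" condition.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created; poll pod.Status.Phase for Succeeded or Failed, as the 5m0s wait above does")
}

Polling pod.Status.Phase until it leaves Pending mirrors the repeated Phase="Pending" ... Elapsed lines in the log above.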
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 1 12:57:20.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 1 12:57:20.725: INFO: Creating deployment "nginx-deployment"
Jan 1 12:57:20.789: INFO: Waiting for observed generation 1
Jan 1 12:57:23.203: INFO: Waiting for all required pods to come up
Jan 1 12:57:24.597: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 1 12:57:55.190: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 1 12:57:55.198: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 1 12:57:55.208: INFO: Updating deployment nginx-deployment
Jan 1 12:57:55.208: INFO: Waiting for observed generation 2
Jan 1 12:57:58.349: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 1 12:58:00.898: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 1 12:58:01.280: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 1 12:58:01.313: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 1 12:58:01.313: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 1 12:58:01.317: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 1 12:58:01.321: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 1 12:58:01.321: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 1 12:58:01.331: INFO: Updating deployment nginx-deployment
Jan 1 12:58:01.331: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 1 12:58:01.533: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 1 12:58:02.275: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 1 12:58:03.776: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8141,SelfLink:/apis/apps/v1/namespaces/deployment-8141/deployments/nginx-deployment,UID:9ae66602-1f10-4868-a660-9f48ef1cc7ad,ResourceVersion:18891864,Generation:3,CreationTimestamp:2020-01-01 12:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-01 12:58:00 +0000 UTC 2020-01-01 12:57:20 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-01 12:58:01 +0000
UTC 2020-01-01 12:58:01 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 1 12:58:05.989: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8141,SelfLink:/apis/apps/v1/namespaces/deployment-8141/replicasets/nginx-deployment-55fb7cb77f,UID:2c49af4e-7c2d-498b-a6cb-e74ec0778350,ResourceVersion:18891909,Generation:3,CreationTimestamp:2020-01-01 12:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 9ae66602-1f10-4868-a660-9f48ef1cc7ad 0xc002ab8bd7 0xc002ab8bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 1 12:58:05.989: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 1 12:58:05.989: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8141,SelfLink:/apis/apps/v1/namespaces/deployment-8141/replicasets/nginx-deployment-7b8c6f4498,UID:3e33e123-64ae-407a-abb0-1e789454bd03,ResourceVersion:18891906,Generation:3,CreationTimestamp:2020-01-01 12:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 9ae66602-1f10-4868-a660-9f48ef1cc7ad 0xc002ab8ca7 0xc002ab8ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 1 12:58:07.205: INFO: Pod "nginx-deployment-55fb7cb77f-62v5j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-62v5j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-62v5j,UID:bd19d65f-c8ec-4f1a-9cea-def210cde4ed,ResourceVersion:18891898,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7ede7 0xc002a7ede8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7ee50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7ee70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.206: INFO: Pod "nginx-deployment-55fb7cb77f-9l9v4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9l9v4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-9l9v4,UID:0cef64ce-f331-4fe3-90c2-39477f5fec56,ResourceVersion:18891846,Generation:0,CreationTimestamp:2020-01-01 12:57:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7eef7 0xc002a7eef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7ef60} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002a7ef80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-01 12:57:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.206: INFO: Pod "nginx-deployment-55fb7cb77f-cplpx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cplpx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-cplpx,UID:9205737c-a0f1-4eda-aee1-746a11ea4c9e,ResourceVersion:18891821,Generation:0,CreationTimestamp:2020-01-01 12:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7f057 0xc002a7f058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7f0d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7f0f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-01 12:57:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.206: INFO: Pod "nginx-deployment-55fb7cb77f-d2fh8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d2fh8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-d2fh8,UID:58ac1371-4f74-4ed3-b794-d6e093c8ec0e,ResourceVersion:18891889,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7f1c7 0xc002a7f1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7f230} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7f250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.212: INFO: Pod "nginx-deployment-55fb7cb77f-kbs96" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kbs96,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-kbs96,UID:9b4f8436-4299-4cda-91ed-b7c53ca2c6f0,ResourceVersion:18891894,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7f2d7 0xc002a7f2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7f350} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7f370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.213: INFO: Pod "nginx-deployment-55fb7cb77f-m4brz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m4brz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-m4brz,UID:1eab58bd-6672-459f-9936-b6901911ed07,ResourceVersion:18891905,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7f3f7 0xc002a7f3f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7f470} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7f490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.213: INFO: Pod "nginx-deployment-55fb7cb77f-pkxql" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pkxql,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-pkxql,UID:753bf878-a537-4e97-a7e9-6629057a4ed2,ResourceVersion:18891896,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7f517 0xc002a7f518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7f590} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7f5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.216: INFO: Pod "nginx-deployment-55fb7cb77f-q95b7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q95b7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-q95b7,UID:40870ff3-487f-4d2b-b7b7-762f14249953,ResourceVersion:18891850,Generation:0,CreationTimestamp:2020-01-01 12:57:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7f637 0xc002a7f638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7f6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7f6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:58 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-01 12:57:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.218: INFO: Pod "nginx-deployment-55fb7cb77f-s98rk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s98rk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-s98rk,UID:328176a1-cd37-4e29-adbf-1bd8599755a8,ResourceVersion:18891901,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7f7a7 0xc002a7f7a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7f810} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7f830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.218: INFO: Pod "nginx-deployment-55fb7cb77f-slgn9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-slgn9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-slgn9,UID:dd640cba-0d89-4dcc-a90f-310476d8a6f1,ResourceVersion:18891868,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7f8b7 0xc002a7f8b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7f920} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7f940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.219: INFO: Pod "nginx-deployment-55fb7cb77f-t6j5d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t6j5d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-t6j5d,UID:c4352775-71e0-4be5-8a10-e21d65eee975,ResourceVersion:18891832,Generation:0,CreationTimestamp:2020-01-01 12:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7f9c7 0xc002a7f9c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7fa40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7fa60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-01 12:57:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.220: INFO: Pod "nginx-deployment-55fb7cb77f-x6vwf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x6vwf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-x6vwf,UID:463d6ef1-5b1a-4553-8451-49647a5076a4,ResourceVersion:18891820,Generation:0,CreationTimestamp:2020-01-01 12:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7fb37 0xc002a7fb38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7fba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7fbc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:55 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-01 12:57:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.220: INFO: Pod "nginx-deployment-55fb7cb77f-zp9h6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zp9h6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-55fb7cb77f-zp9h6,UID:cdeefb90-8b62-47ed-ac34-645e569d9b2b,ResourceVersion:18891885,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2c49af4e-7c2d-498b-a6cb-e74ec0778350 0xc002a7fc97 0xc002a7fc98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7fd10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7fd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.221: INFO: Pod "nginx-deployment-7b8c6f4498-6xhv5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6xhv5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-6xhv5,UID:7a46b650-cd23-45ea-8b33-c63d133208a5,ResourceVersion:18891890,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002a7fdb7 0xc002a7fdb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7fe30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002a7fe50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.222: INFO: Pod "nginx-deployment-7b8c6f4498-75h2g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-75h2g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-75h2g,UID:b76a7b7a-ba5b-4fc3-ab86-1957c9ce5a19,ResourceVersion:18891900,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002a7fed7 0xc002a7fed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a7ff40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a7ff60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.222: INFO: Pod "nginx-deployment-7b8c6f4498-8x4mm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8x4mm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-8x4mm,UID:a99a8485-6989-4527-a069-de214a5db41f,ResourceVersion:18891891,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002a7ffe7 0xc002a7ffe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70050} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.223: INFO: Pod "nginx-deployment-7b8c6f4498-97gbb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-97gbb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-97gbb,UID:f2729498-49df-470f-ae33-1f3cc0e321fb,ResourceVersion:18891899,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d700f7 0xc002d700f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.224: INFO: Pod "nginx-deployment-7b8c6f4498-b8w7v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b8w7v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-b8w7v,UID:fe3d71e7-193d-46ff-88d2-51cb92c4e888,ResourceVersion:18891915,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d70217 0xc002d70218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70290} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d702b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-01 12:58:03 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.225: INFO: Pod "nginx-deployment-7b8c6f4498-bfz8z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bfz8z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-bfz8z,UID:6ec58553-f9b9-403b-a53c-058095d1672c,ResourceVersion:18891789,Generation:0,CreationTimestamp:2020-01-01 12:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d70377 0xc002d70378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d703f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:54 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:20 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.6,StartTime:2020-01-01 12:57:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:57:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2e9d897bc21b50d6754a0ecbd66ae16e2dc580a3dcfaa8905b225956f86014c0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.227: INFO: Pod "nginx-deployment-7b8c6f4498-bzchl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bzchl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-bzchl,UID:70a18f1a-9270-4bda-b0a9-feb716d47054,ResourceVersion:18891756,Generation:0,CreationTimestamp:2020-01-01 12:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d704e7 0xc002d704e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70550} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:20 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-01 12:57:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:57:47 +0000 UTC,} nil} 
{nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c47b2987ce1636569706b67061696129c0e2bbf1268bc7b821701a4b07ec6ca7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.228: INFO: Pod "nginx-deployment-7b8c6f4498-dnw84" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dnw84,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-dnw84,UID:b235cb35-247f-4010-a338-6f323c793094,ResourceVersion:18891773,Generation:0,CreationTimestamp:2020-01-01 12:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d70647 0xc002d70648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d706c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d706e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:20 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-01 12:57:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:57:52 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5ba925215aeedabe47e5af8b6fa1951a28debc796efe7efb054c4bcbb945c4f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.230: INFO: Pod "nginx-deployment-7b8c6f4498-f8tjz" is not 
available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f8tjz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-f8tjz,UID:f1a43fc2-f36b-416e-94f6-6370c57eaa1b,ResourceVersion:18891888,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d707b7 0xc002d707b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70830} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.233: INFO: Pod "nginx-deployment-7b8c6f4498-fsfnh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fsfnh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-fsfnh,UID:f07f454a-0837-4070-b6a5-3770fa53803e,ResourceVersion:18891759,Generation:0,CreationTimestamp:2020-01-01 12:57:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d708d7 0xc002d708d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70940} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-01 12:57:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:57:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1da28742bb4490bd6788f9d90457679ece6f18cac153ae3b79d1f2bba589393a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.234: INFO: Pod "nginx-deployment-7b8c6f4498-ftcqz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ftcqz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-ftcqz,UID:4e0d8d2e-b936-4acc-9ac4-5504c245218e,ResourceVersion:18891895,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d70a37 0xc002d70a38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.235: INFO: Pod "nginx-deployment-7b8c6f4498-h2r2b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h2r2b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-h2r2b,UID:d7e769a0-ecfd-42e0-97d2-fb9da409c00d,ResourceVersion:18891786,Generation:0,CreationTimestamp:2020-01-01 12:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d70b57 0xc002d70b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70bd0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d70bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-01 12:57:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:57:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9c1902cff5be94f211ab15920f7e1afce5b5efb28f1c9765e467b4486bb7c937}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.236: INFO: Pod "nginx-deployment-7b8c6f4498-hlf2w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hlf2w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-hlf2w,UID:a9be2826-8bb5-4f46-9e60-f82b4c42306c,ResourceVersion:18891902,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d70cc7 0xc002d70cc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.237: INFO: Pod "nginx-deployment-7b8c6f4498-jmwjh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jmwjh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-jmwjh,UID:467924ec-6a41-47ed-968e-46333620f686,ResourceVersion:18891792,Generation:0,CreationTimestamp:2020-01-01 12:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d70dd7 0xc002d70dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-01 12:57:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:57:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c2cea0c6f84c2b411d1bbac18a7c8e0ee91c9fb6a224de29e584c12a84208779}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.242: INFO: Pod "nginx-deployment-7b8c6f4498-kfstf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kfstf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-kfstf,UID:f88d0500-84e1-4d6b-a83e-c6e3050590a6,ResourceVersion:18891886,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d70f47 0xc002d70f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d70fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d70fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.244: INFO: Pod "nginx-deployment-7b8c6f4498-kx8xm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kx8xm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-kx8xm,UID:1e681947-644a-4004-a204-6ca45804486d,ResourceVersion:18891897,Generation:0,CreationTimestamp:2020-01-01 12:58:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d71057 0xc002d71058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d710d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d710f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.245: INFO: Pod "nginx-deployment-7b8c6f4498-lvtc6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lvtc6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-lvtc6,UID:8d6292c4-69a2-4c86-9b47-2f8750efffef,ResourceVersion:18891746,Generation:0,CreationTimestamp:2020-01-01 12:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d71177 0xc002d71178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d711e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d71200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-01 12:57:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:57:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://78d0a9af3e65a3160a3f1a497f5fd8fa24f1796db709cb9504f02230f5ffb892}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.245: INFO: Pod "nginx-deployment-7b8c6f4498-qg4g6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qg4g6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-qg4g6,UID:ba3a7422-d429-4f27-b375-65dd659b7135,ResourceVersion:18891866,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d712d7 0xc002d712d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d71350} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d71370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.245: INFO: Pod "nginx-deployment-7b8c6f4498-v7xkb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v7xkb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-v7xkb,UID:1ee0202f-4241-4bd1-a9a7-88981c6e790a,ResourceVersion:18891914,Generation:0,CreationTimestamp:2020-01-01 12:58:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d713f7 0xc002d713f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d71460} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d71480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:58:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-01 12:58:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 1 12:58:07.245: INFO: Pod "nginx-deployment-7b8c6f4498-w9pc6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w9pc6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8141,SelfLink:/api/v1/namespaces/deployment-8141/pods/nginx-deployment-7b8c6f4498-w9pc6,UID:0a601eb7-1bf3-49b6-9973-9a43a9d22389,ResourceVersion:18891753,Generation:0,CreationTimestamp:2020-01-01 12:57:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3e33e123-64ae-407a-abb0-1e789454bd03 0xc002d71547 0xc002d71548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wb924 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wb924,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-wb924 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d715b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d715d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:48 +0000 UTC } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 12:57:21 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-01 12:57:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 12:57:47 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e8f45b9895f066c6e6d2b3008c650c96f6a6163ed553a2d5bb6625f55ba6b259}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 1 12:58:07.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8141" for this suite. Jan 1 12:59:38.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 12:59:40.219: INFO: namespace deployment-8141 deletion completed in 1m31.475780769s • [SLOW TEST:139.578 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 1 12:59:40.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 1 12:59:42.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1246' Jan 1 12:59:47.323: INFO: stderr: "" Jan 1 12:59:47.323: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
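Note on the check loop that follows: the suite polls two kubectl go-template queries, roughly every five seconds per the timestamps below, until every replica reports a running update-demo container. A minimal by-hand sketch of those same queries, assuming the kubeconfig and namespace from this run:

    # list the pod names managed by the update-demo replication controller
    kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo \
      --namespace=kubectl-1246 \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

    # print "true" once the update-demo container in a given pod is running;
    # empty output means the container has not started yet ("exists" is a
    # kubectl-registered template helper, as the log lines below show)
    kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9ms \
      --namespace=kubectl-1246 \
      -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'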
Jan 1 12:59:47.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 12:59:48.380: INFO: stderr: "" Jan 1 12:59:48.381: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Jan 1 12:59:53.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 12:59:53.855: INFO: stderr: "" Jan 1 12:59:53.856: INFO: stdout: "update-demo-nautilus-jz9ms update-demo-nautilus-xwhp4 " Jan 1 12:59:53.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9ms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 12:59:54.349: INFO: stderr: "" Jan 1 12:59:54.350: INFO: stdout: "" Jan 1 12:59:54.350: INFO: update-demo-nautilus-jz9ms is created but not running Jan 1 12:59:59.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 12:59:59.555: INFO: stderr: "" Jan 1 12:59:59.555: INFO: stdout: "update-demo-nautilus-jz9ms update-demo-nautilus-xwhp4 " Jan 1 12:59:59.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9ms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 12:59:59.661: INFO: stderr: "" Jan 1 12:59:59.661: INFO: stdout: "" Jan 1 12:59:59.661: INFO: update-demo-nautilus-jz9ms is created but not running Jan 1 13:00:04.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 13:00:05.062: INFO: stderr: "" Jan 1 13:00:05.062: INFO: stdout: "update-demo-nautilus-jz9ms update-demo-nautilus-xwhp4 " Jan 1 13:00:05.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9ms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:05.471: INFO: stderr: "" Jan 1 13:00:05.471: INFO: stdout: "" Jan 1 13:00:05.471: INFO: update-demo-nautilus-jz9ms is created but not running Jan 1 13:00:10.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 13:00:10.674: INFO: stderr: "" Jan 1 13:00:10.674: INFO: stdout: "update-demo-nautilus-jz9ms update-demo-nautilus-xwhp4 " Jan 1 13:00:10.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9ms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:10.792: INFO: stderr: "" Jan 1 13:00:10.792: INFO: stdout: "" Jan 1 13:00:10.792: INFO: update-demo-nautilus-jz9ms is created but not running Jan 1 13:00:15.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 13:00:16.006: INFO: stderr: "" Jan 1 13:00:16.007: INFO: stdout: "update-demo-nautilus-jz9ms update-demo-nautilus-xwhp4 " Jan 1 13:00:16.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9ms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:16.111: INFO: stderr: "" Jan 1 13:00:16.111: INFO: stdout: "true" Jan 1 13:00:16.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9ms -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:16.306: INFO: stderr: "" Jan 1 13:00:16.306: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 1 13:00:16.306: INFO: validating pod update-demo-nautilus-jz9ms Jan 1 13:00:16.421: INFO: got data: { "image": "nautilus.jpg" } Jan 1 13:00:16.421: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 1 13:00:16.422: INFO: update-demo-nautilus-jz9ms is verified up and running Jan 1 13:00:16.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwhp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:16.527: INFO: stderr: "" Jan 1 13:00:16.528: INFO: stdout: "true" Jan 1 13:00:16.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwhp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:16.651: INFO: stderr: "" Jan 1 13:00:16.651: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 1 13:00:16.652: INFO: validating pod update-demo-nautilus-xwhp4 Jan 1 13:00:16.674: INFO: got data: { "image": "nautilus.jpg" } Jan 1 13:00:16.674: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 1 13:00:16.674: INFO: update-demo-nautilus-xwhp4 is verified up and running STEP: scaling down the replication controller Jan 1 13:00:16.684: INFO: scanned /root for discovery docs: Jan 1 13:00:16.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1246' Jan 1 13:00:17.889: INFO: stderr: "" Jan 1 13:00:17.889: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 1 13:00:17.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 13:00:18.137: INFO: stderr: "" Jan 1 13:00:18.137: INFO: stdout: "update-demo-nautilus-jz9ms update-demo-nautilus-xwhp4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 1 13:00:23.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 13:00:23.286: INFO: stderr: "" Jan 1 13:00:23.287: INFO: stdout: "update-demo-nautilus-jz9ms update-demo-nautilus-xwhp4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 1 13:00:28.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 13:00:28.439: INFO: stderr: "" Jan 1 13:00:28.439: INFO: stdout: "update-demo-nautilus-xwhp4 " Jan 1 13:00:28.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwhp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:28.559: INFO: stderr: "" Jan 1 13:00:28.559: INFO: stdout: "true" Jan 1 13:00:28.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwhp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:28.671: INFO: stderr: "" Jan 1 13:00:28.671: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 1 13:00:28.671: INFO: validating pod update-demo-nautilus-xwhp4 Jan 1 13:00:28.675: INFO: got data: { "image": "nautilus.jpg" } Jan 1 13:00:28.675: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 1 13:00:28.675: INFO: update-demo-nautilus-xwhp4 is verified up and running STEP: scaling up the replication controller Jan 1 13:00:28.678: INFO: scanned /root for discovery docs: Jan 1 13:00:28.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1246' Jan 1 13:00:29.934: INFO: stderr: "" Jan 1 13:00:29.935: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 1 13:00:29.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 13:00:30.150: INFO: stderr: "" Jan 1 13:00:30.150: INFO: stdout: "update-demo-nautilus-8zzfz update-demo-nautilus-xwhp4 " Jan 1 13:00:30.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zzfz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:30.253: INFO: stderr: "" Jan 1 13:00:30.253: INFO: stdout: "" Jan 1 13:00:30.253: INFO: update-demo-nautilus-8zzfz is created but not running Jan 1 13:00:35.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 13:00:35.502: INFO: stderr: "" Jan 1 13:00:35.502: INFO: stdout: "update-demo-nautilus-8zzfz update-demo-nautilus-xwhp4 " Jan 1 13:00:35.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zzfz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:35.697: INFO: stderr: "" Jan 1 13:00:35.697: INFO: stdout: "" Jan 1 13:00:35.697: INFO: update-demo-nautilus-8zzfz is created but not running Jan 1 13:00:40.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1246' Jan 1 13:00:40.845: INFO: stderr: "" Jan 1 13:00:40.845: INFO: stdout: "update-demo-nautilus-8zzfz update-demo-nautilus-xwhp4 " Jan 1 13:00:40.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zzfz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:40.999: INFO: stderr: "" Jan 1 13:00:41.000: INFO: stdout: "true" Jan 1 13:00:41.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zzfz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:41.147: INFO: stderr: "" Jan 1 13:00:41.148: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 1 13:00:41.148: INFO: validating pod update-demo-nautilus-8zzfz Jan 1 13:00:41.158: INFO: got data: { "image": "nautilus.jpg" } Jan 1 13:00:41.158: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 1 13:00:41.158: INFO: update-demo-nautilus-8zzfz is verified up and running Jan 1 13:00:41.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwhp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:41.243: INFO: stderr: "" Jan 1 13:00:41.243: INFO: stdout: "true" Jan 1 13:00:41.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xwhp4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1246' Jan 1 13:00:41.349: INFO: stderr: "" Jan 1 13:00:41.349: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 1 13:00:41.349: INFO: validating pod update-demo-nautilus-xwhp4 Jan 1 13:00:41.353: INFO: got data: { "image": "nautilus.jpg" } Jan 1 13:00:41.353: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 1 13:00:41.353: INFO: update-demo-nautilus-xwhp4 is verified up and running STEP: using delete to clean up resources Jan 1 13:00:41.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1246' Jan 1 13:00:41.572: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 1 13:00:41.572: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 1 13:00:41.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1246' Jan 1 13:00:41.740: INFO: stderr: "No resources found.\n" Jan 1 13:00:41.740: INFO: stdout: "" Jan 1 13:00:41.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1246 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 1 13:00:41.942: INFO: stderr: "" Jan 1 13:00:41.942: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 1 13:00:41.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1246" for this suite. 
Jan 1 13:01:06.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 1 13:01:06.179: INFO: namespace kubectl-1246 deletion completed in 24.223493958s • [SLOW TEST:85.958 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 1 13:01:06.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jan 1 13:01:22.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-1dc0787b-af6e-4050-95e9-f3b1b77ce9f7 -c busybox-main-container --namespace=emptydir-4030 -- cat /usr/share/volumeshare/shareddata.txt' Jan 1 13:01:23.125: INFO: stderr: "" Jan 1 13:01:23.125: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 1 13:01:23.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4030" for this suite. 
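The Update Demo test above drives everything through the kubectl binary: it lists pods for the label with a Go template, then re-checks each pod's containerStatuses every five seconds until the expected replica count is up and running. A minimal standalone sketch of that polling loop, assuming kubectl is on PATH and authenticated (namespace and label copied from the run above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // pollPodNames mimics the framework's loop: list pods for a label with a
    // Go template and retry every 5s until the expected count is reached.
    func pollPodNames(namespace, selector string, want int) []string {
        tmpl := `{{range .items}}{{.metadata.name}} {{end}}`
        for {
            out, err := exec.Command("kubectl", "get", "pods",
                "-o", "template", "--template", tmpl,
                "-l", selector, "--namespace", namespace).Output()
            if err == nil {
                if names := strings.Fields(string(out)); len(names) == want {
                    return names
                }
            }
            time.Sleep(5 * time.Second) // same 5s cadence as the log above
        }
    }

    func main() {
        fmt.Println("pods up:", pollPodNames("kubectl-1246", "name=update-demo", 2))
    }

The "is created but not running" / "true" lines in the log come from the second template, which walks status.containerStatuses per pod in the same way.
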
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:01:06.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan  1 13:01:22.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-1dc0787b-af6e-4050-95e9-f3b1b77ce9f7 -c busybox-main-container --namespace=emptydir-4030 -- cat /usr/share/volumeshare/shareddata.txt'
Jan  1 13:01:23.125: INFO: stderr: ""
Jan  1 13:01:23.125: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:01:23.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4030" for this suite.
Jan  1 13:01:29.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:01:29.292: INFO: namespace emptydir-4030 deletion completed in 6.145639922s

• [SLOW TEST:23.111 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:01:29.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Jan  1 13:01:42.557: INFO: Pod name pod-adoption-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:01:43.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8417" for this suite.
Jan  1 13:02:07.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:02:07.928: INFO: namespace replicaset-8417 deletion completed in 24.278383731s

• [SLOW TEST:38.635 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:02:07.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3292/configmap-test-ec15bee4-8320-45eb-935d-7e38643844e2
STEP: Creating a pod to test consume configMaps
Jan  1 13:02:08.075: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1" in namespace "configmap-3292" to be "success or failure"
Jan  1 13:02:08.081: INFO: Pod "pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.492783ms
Jan  1 13:02:10.120: INFO: Pod "pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044738129s
Jan  1 13:02:12.129: INFO: Pod "pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053734896s
Jan  1 13:02:14.156: INFO: Pod "pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080802463s
Jan  1 13:02:16.166: INFO: Pod "pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090883439s
Jan  1 13:02:18.176: INFO: Pod "pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100866358s
STEP: Saw pod success
Jan  1 13:02:18.177: INFO: Pod "pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1" satisfied condition "success or failure"
Jan  1 13:02:18.181: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1 container env-test: 
STEP: delete the pod
Jan  1 13:02:18.246: INFO: Waiting for pod pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1 to disappear
Jan  1 13:02:18.320: INFO: Pod pod-configmaps-8d707d61-8312-4f88-82fd-7d29ebd169e1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:02:18.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3292" for this suite.
Jan  1 13:02:24.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:02:24.541: INFO: namespace configmap-3292 deletion completed in 6.206477364s

• [SLOW TEST:16.612 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
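The ConfigMap test follows the pattern used by most of the volume and env conformance cases in this run: create a short-lived pod, poll its phase until it is terminal (the "success or failure" condition above, capped at 5m), then compare the container's logs. A sketch of just the wait step, using jsonpath output instead of the framework's internal client (the pod name here is illustrative, not from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitSuccessOrFailure polls the pod phase the way the framework does:
    // every couple of seconds, up to a 5m deadline, until it is terminal.
    func waitSuccessOrFailure(namespace, pod string) (string, error) {
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "pod", pod,
                "--namespace", namespace,
                "-o", "jsonpath={.status.phase}").Output()
            if err != nil {
                return "", err
            }
            if phase := string(out); phase == "Succeeded" || phase == "Failed" {
                return phase, nil
            }
            time.Sleep(2 * time.Second)
        }
        return "", fmt.Errorf("pod %s/%s did not terminate in 5m", namespace, pod)
    }

    func main() {
        phase, err := waitSuccessOrFailure("configmap-3292", "pod-configmaps-example") // hypothetical pod name
        fmt.Println(phase, err)
    }
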
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:02:24.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:02:24.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0" in namespace "projected-1390" to be "success or failure"
Jan  1 13:02:24.679: INFO: Pod "downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.820881ms
Jan  1 13:02:26.691: INFO: Pod "downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018015398s
Jan  1 13:02:28.724: INFO: Pod "downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05057832s
Jan  1 13:02:30.734: INFO: Pod "downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060583105s
Jan  1 13:02:32.741: INFO: Pod "downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067358792s
Jan  1 13:02:34.758: INFO: Pod "downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084398772s
STEP: Saw pod success
Jan  1 13:02:34.758: INFO: Pod "downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0" satisfied condition "success or failure"
Jan  1 13:02:34.763: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0 container client-container: 
STEP: delete the pod
Jan  1 13:02:34.910: INFO: Waiting for pod downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0 to disappear
Jan  1 13:02:34.948: INFO: Pod downwardapi-volume-c991a189-3969-4c60-9402-a408c8c1ebc0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:02:34.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1390" for this suite.
Jan  1 13:02:40.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:02:41.110: INFO: namespace projected-1390 deletion completed in 6.152948125s

• [SLOW TEST:16.567 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:02:41.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:02:47.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1289" for this suite.
Jan  1 13:02:53.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:02:53.823: INFO: namespace namespaces-1289 deletion completed in 6.156761154s
STEP: Destroying namespace "nsdeletetest-1193" for this suite.
Jan  1 13:02:53.825: INFO: Namespace nsdeletetest-1193 was already deleted
STEP: Destroying namespace "nsdeletetest-5075" for this suite.
Jan  1 13:02:59.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:02:59.996: INFO: namespace nsdeletetest-5075 deletion completed in 6.171583097s

• [SLOW TEST:18.885 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
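The namespace test relies on cascading deletion: removing a namespace removes every object inside it, including the service it created, and the delete only finishes once finalizers have drained, which is why the log shows repeated waits before the recreate step. Roughly the same check done by hand (the namespace name below is hypothetical, not from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func run(args ...string) (string, error) {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        ns := "nsdeletetest-demo" // hypothetical namespace

        // Deleting the namespace cascades to everything inside it.
        run("delete", "namespace", ns)

        // Namespace deletion is asynchronous: poll until it is gone.
        for {
            if _, err := run("get", "namespace", ns); err != nil {
                break // NotFound: deletion finished
            }
            time.Sleep(2 * time.Second)
        }

        // Recreate it and confirm no services survived the round trip.
        run("create", "namespace", ns)
        out, _ := run("get", "services", "--namespace", ns, "--no-headers")
        fmt.Println("services left:", strings.TrimSpace(out)) // expect none
    }
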
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:02:59.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:03:00.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:03:10.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8377" for this suite.
Jan  1 13:03:54.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:03:54.578: INFO: namespace pods-8377 deletion completed in 44.17913901s

• [SLOW TEST:54.581 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:03:54.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  1 13:04:02.737: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-5b7d6918-a0ef-49ca-a1a8-9cfb2117016d,GenerateName:,Namespace:events-2223,SelfLink:/api/v1/namespaces/events-2223/pods/send-events-5b7d6918-a0ef-49ca-a1a8-9cfb2117016d,UID:4064769e-cf4d-478e-a60a-37b334618881,ResourceVersion:18892902,Generation:0,CreationTimestamp:2020-01-01 13:03:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 698243422,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pbqzq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pbqzq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pbqzq true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002740d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002740d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:03:54 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:04:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:04:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:03:54 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-01 13:03:54 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-01 13:04:01 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://81664bc138ca805bae291347998015413779f55b9e27511d8b2c078ad5e2824d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Jan  1 13:04:04.745: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  1 13:04:06.765: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:04:06.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2223" for this suite.
Jan  1 13:04:44.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:04:45.046: INFO: namespace events-2223 deletion completed in 38.115280291s

• [SLOW TEST:50.468 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
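The event checks above assert that two separate components reported on the pod: the scheduler (a Scheduled event) and the kubelet on iruya-node (image pull / create / start events). The equivalent query outside the framework uses a field selector on the event's involvedObject; the pod name below is copied from the dump above, and the query only works while the events are still retained:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Select only events that reference the test pod.
        sel := "involvedObject.name=send-events-5b7d6918-a0ef-49ca-a1a8-9cfb2117016d"
        out, err := exec.Command("kubectl", "get", "events",
            "--namespace", "events-2223",
            "--field-selector", sel).CombinedOutput()
        if err != nil {
            fmt.Println("kubectl failed:", err)
        }
        // Expect a Scheduled event from default-scheduler plus
        // Pulled/Created/Started events from the kubelet on iruya-node.
        fmt.Print(string(out))
    }
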
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:04:45.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-d1584489-01ee-45c5-9520-9004d9de2f47
STEP: Creating a pod to test consume secrets
Jan  1 13:04:45.199: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c" in namespace "projected-9394" to be "success or failure"
Jan  1 13:04:45.209: INFO: Pod "pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.294179ms
Jan  1 13:04:47.216: INFO: Pod "pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017215628s
Jan  1 13:04:49.225: INFO: Pod "pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026139287s
Jan  1 13:04:51.237: INFO: Pod "pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038526041s
Jan  1 13:04:53.248: INFO: Pod "pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049109889s
Jan  1 13:04:55.257: INFO: Pod "pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058289503s
STEP: Saw pod success
Jan  1 13:04:55.258: INFO: Pod "pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c" satisfied condition "success or failure"
Jan  1 13:04:55.263: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 13:04:55.876: INFO: Waiting for pod pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c to disappear
Jan  1 13:04:55.894: INFO: Pod pod-projected-secrets-38664428-a060-4242-b655-47c2b27dc90c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:04:55.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9394" for this suite.
Jan  1 13:05:01.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:05:02.124: INFO: namespace projected-9394 deletion completed in 6.212525252s

• [SLOW TEST:17.077 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:05:02.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:05:02.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3" in namespace "projected-9674" to be "success or failure"
Jan  1 13:05:02.281: INFO: Pod "downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 50.071729ms
Jan  1 13:05:04.289: INFO: Pod "downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057832183s
Jan  1 13:05:06.300: INFO: Pod "downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068966577s
Jan  1 13:05:08.314: INFO: Pod "downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083115261s
Jan  1 13:05:10.537: INFO: Pod "downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306431373s
Jan  1 13:05:12.555: INFO: Pod "downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324043191s
STEP: Saw pod success
Jan  1 13:05:12.555: INFO: Pod "downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3" satisfied condition "success or failure"
Jan  1 13:05:12.565: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3 container client-container: 
STEP: delete the pod
Jan  1 13:05:12.678: INFO: Waiting for pod downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3 to disappear
Jan  1 13:05:12.738: INFO: Pod downwardapi-volume-257b87a7-707e-4885-8480-afd3f98a6ec3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:05:12.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9674" for this suite.
Jan  1 13:05:18.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:05:18.936: INFO: namespace projected-9674 deletion completed in 6.190410783s

• [SLOW TEST:16.812 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy 
  version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:05:18.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:05:19.119: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 18.563764ms)
Jan  1 13:05:19.132: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.490602ms)
Jan  1 13:05:19.138: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.860239ms)
Jan  1 13:05:19.143: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.234948ms)
Jan  1 13:05:19.150: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.363277ms)
Jan  1 13:05:19.157: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.520899ms)
Jan  1 13:05:19.163: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.981476ms)
Jan  1 13:05:19.169: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.580479ms)
Jan  1 13:05:19.175: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.70315ms)
Jan  1 13:05:19.183: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.81717ms)
Jan  1 13:05:19.189: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.062331ms)
Jan  1 13:05:19.195: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.795371ms)
Jan  1 13:05:19.200: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.382921ms)
Jan  1 13:05:19.206: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.49448ms)
Jan  1 13:05:19.211: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.222326ms)
Jan  1 13:05:19.218: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.075641ms)
Jan  1 13:05:19.223: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.082534ms)
Jan  1 13:05:19.230: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.563509ms)
Jan  1 13:05:19.235: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.707805ms)
Jan  1 13:05:19.242: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.531045ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:05:19.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2837" for this suite.
Jan  1 13:05:25.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:05:25.407: INFO: namespace proxy-2837 deletion completed in 6.158391162s

• [SLOW TEST:6.469 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
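Each timed request in the proxy test goes through the node proxy subresource: the API server authenticates the caller, forwards the request to the kubelet's port 10250, and relays back its /logs/ file index, which is why every response above begins with the node's log-file listing. One way to issue the same request by hand, assuming kubectl access to this cluster:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "kubectl get --raw" sends an authenticated GET for the given API
        // path; here that path is the node proxy subresource used above.
        out, err := exec.Command("kubectl", "get", "--raw",
            "/api/v1/nodes/iruya-node:10250/proxy/logs/").Output()
        if err != nil {
            fmt.Println("proxy request failed:", err)
            return
        }
        fmt.Print(string(out)) // an HTML index listing alternatives.log etc.
    }
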
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:05:25.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f6f21d53-be85-446e-b999-896406b68d60
STEP: Creating a pod to test consume secrets
Jan  1 13:05:25.568: INFO: Waiting up to 5m0s for pod "pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8" in namespace "secrets-9177" to be "success or failure"
Jan  1 13:05:25.585: INFO: Pod "pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.865974ms
Jan  1 13:05:27.643: INFO: Pod "pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0743173s
Jan  1 13:05:29.659: INFO: Pod "pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090819274s
Jan  1 13:05:31.733: INFO: Pod "pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164327329s
Jan  1 13:05:33.751: INFO: Pod "pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182305627s
Jan  1 13:05:35.762: INFO: Pod "pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.193342066s
STEP: Saw pod success
Jan  1 13:05:35.762: INFO: Pod "pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8" satisfied condition "success or failure"
Jan  1 13:05:35.767: INFO: Trying to get logs from node iruya-node pod pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8 container secret-volume-test: 
STEP: delete the pod
Jan  1 13:05:35.896: INFO: Waiting for pod pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8 to disappear
Jan  1 13:05:35.909: INFO: Pod pod-secrets-30640b5e-2539-4941-929b-07f901d8faf8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:05:35.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9177" for this suite.
Jan  1 13:05:42.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:05:42.126: INFO: namespace secrets-9177 deletion completed in 6.211006355s

• [SLOW TEST:16.719 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
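With defaultMode and fsGroup set, the kubelet is expected to project each secret key as a file carrying the requested mode bits, with the pod's fsGroup as group owner; the test container exits successfully only if those bits match. A way to eyeball the same thing on a long-running pod (pod name, namespace, and mount path below are hypothetical, and busybox's stat is assumed to support -c):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Print mode, uid and gid of a projected secret file inside the pod.
        out, err := exec.Command("kubectl", "exec", "secret-inspect-pod", // hypothetical pod
            "--namespace", "secrets-demo", // hypothetical namespace
            "--", "stat", "-c", "%a %u %g", "/etc/secret-volume/data-1").CombinedOutput()
        if err != nil {
            fmt.Println("exec failed:", err)
        }
        fmt.Print(string(out)) // e.g. "440 0 <fsGroup>" for defaultMode 0440
    }
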
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:05:42.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  1 13:05:42.411: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  1 13:05:42.425: INFO: Waiting for terminating namespaces to be deleted...
Jan  1 13:05:42.428: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  1 13:05:42.448: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  1 13:05:42.448: INFO: 	Container weave ready: true, restart count 0
Jan  1 13:05:42.448: INFO: 	Container weave-npc ready: true, restart count 0
Jan  1 13:05:42.448: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  1 13:05:42.448: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 13:05:42.448: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  1 13:05:42.462: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  1 13:05:42.462: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  1 13:05:42.462: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  1 13:05:42.462: INFO: 	Container kube-scheduler ready: true, restart count 10
Jan  1 13:05:42.462: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  1 13:05:42.462: INFO: 	Container coredns ready: true, restart count 0
Jan  1 13:05:42.462: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  1 13:05:42.462: INFO: 	Container coredns ready: true, restart count 0
Jan  1 13:05:42.462: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  1 13:05:42.462: INFO: 	Container etcd ready: true, restart count 0
Jan  1 13:05:42.462: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  1 13:05:42.462: INFO: 	Container weave ready: true, restart count 0
Jan  1 13:05:42.462: INFO: 	Container weave-npc ready: true, restart count 0
Jan  1 13:05:42.462: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  1 13:05:42.462: INFO: 	Container kube-controller-manager ready: true, restart count 14
Jan  1 13:05:42.462: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  1 13:05:42.462: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b1d6bb0b-b931-43e2-8c3d-af925e85076d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-b1d6bb0b-b931-43e2-8c3d-af925e85076d off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b1d6bb0b-b931-43e2-8c3d-af925e85076d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:06:00.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7552" for this suite.
Jan  1 13:06:14.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:06:14.975: INFO: namespace sched-pred-7552 deletion completed in 14.227789378s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:32.847 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
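The predicate test works by applying a random label to a node, relaunching the pod with a matching nodeSelector, and finally removing the label again, exactly as the STEP lines above narrate. The same label round-trip from outside the framework (label key and value are illustrative; kubectl's trailing-dash syntax removes a label):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("error:", err)
        }
    }

    func main() {
        // Apply a label to the node; a pod whose nodeSelector matches it
        // can then only be scheduled onto this node.
        run("label", "node", "iruya-node", "e2e-demo=42")

        // -L adds a column for the label, confirming it landed on the node.
        run("get", "nodes", "-L", "e2e-demo")

        // Cleanup mirrors the test's final step: remove the label again.
        run("label", "node", "iruya-node", "e2e-demo-")
    }
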
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:06:14.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-484511fb-2020-4bd5-898b-58be4282aa5d
STEP: Creating a pod to test consume configMaps
Jan  1 13:06:15.096: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269" in namespace "projected-3900" to be "success or failure"
Jan  1 13:06:15.105: INFO: Pod "pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363058ms
Jan  1 13:06:17.110: INFO: Pod "pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013846891s
Jan  1 13:06:19.122: INFO: Pod "pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025501838s
Jan  1 13:06:21.132: INFO: Pod "pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035153524s
Jan  1 13:06:23.138: INFO: Pod "pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042105809s
Jan  1 13:06:25.148: INFO: Pod "pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051222529s
STEP: Saw pod success
Jan  1 13:06:25.148: INFO: Pod "pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269" satisfied condition "success or failure"
Jan  1 13:06:25.152: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 13:06:25.267: INFO: Waiting for pod pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269 to disappear
Jan  1 13:06:25.286: INFO: Pod pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:06:25.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3900" for this suite.
Jan  1 13:06:31.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:06:31.459: INFO: namespace projected-3900 deletion completed in 6.161123764s

• [SLOW TEST:16.483 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
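As with the other volume tests, the pass/fail signal is the test container's log output, fetched right after "Saw pod success" and compared against the expected file content. The framework's log fetch corresponds to a plain kubectl logs call (pod and container names copied from the run above; it only works before the teardown deletes the pod):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Fetch the logs of the completed test container, the same data the
        // framework reads to verify the projected configMap's file content.
        out, err := exec.Command("kubectl", "logs",
            "pod-projected-configmaps-dba07962-026d-49df-8c1b-29a6faeed269",
            "-c", "projected-configmap-volume-test",
            "--namespace", "projected-3900").CombinedOutput()
        if err != nil {
            fmt.Println("logs failed:", err)
        }
        fmt.Print(string(out))
    }
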
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:06:31.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  1 13:06:31.624: INFO: Waiting up to 5m0s for pod "pod-c56cd107-b697-4dad-9cb1-a49ef4301333" in namespace "emptydir-8945" to be "success or failure"
Jan  1 13:06:31.631: INFO: Pod "pod-c56cd107-b697-4dad-9cb1-a49ef4301333": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304669ms
Jan  1 13:06:33.643: INFO: Pod "pod-c56cd107-b697-4dad-9cb1-a49ef4301333": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018477775s
Jan  1 13:06:35.654: INFO: Pod "pod-c56cd107-b697-4dad-9cb1-a49ef4301333": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029688824s
Jan  1 13:06:37.664: INFO: Pod "pod-c56cd107-b697-4dad-9cb1-a49ef4301333": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039582112s
Jan  1 13:06:39.676: INFO: Pod "pod-c56cd107-b697-4dad-9cb1-a49ef4301333": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052036425s
Jan  1 13:06:41.686: INFO: Pod "pod-c56cd107-b697-4dad-9cb1-a49ef4301333": Phase="Pending", Reason="", readiness=false. Elapsed: 10.061768673s
Jan  1 13:06:43.698: INFO: Pod "pod-c56cd107-b697-4dad-9cb1-a49ef4301333": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.073647086s
STEP: Saw pod success
Jan  1 13:06:43.698: INFO: Pod "pod-c56cd107-b697-4dad-9cb1-a49ef4301333" satisfied condition "success or failure"
Jan  1 13:06:43.703: INFO: Trying to get logs from node iruya-node pod pod-c56cd107-b697-4dad-9cb1-a49ef4301333 container test-container: 
STEP: delete the pod
Jan  1 13:06:43.889: INFO: Waiting for pod pod-c56cd107-b697-4dad-9cb1-a49ef4301333 to disappear
Jan  1 13:06:43.968: INFO: Pod pod-c56cd107-b697-4dad-9cb1-a49ef4301333 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:06:43.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8945" for this suite.
Jan  1 13:06:50.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:06:50.519: INFO: namespace emptydir-8945 deletion completed in 6.502577223s

• [SLOW TEST:19.060 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
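
In-tree, the test image writes a file with the requested mode (0644) into the volume and verifies mode, content, and the tmpfs mount; the volume wiring it relies on is just the following (a minimal sketch, with busybox standing in for the e2e test image and all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # the "non-root" part of the spec name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "grep ' /mnt ' /proc/mounts; ls -l /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
EOF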
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:06:50.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  1 13:06:59.915: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:07:00.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4252" for this suite.
Jan  1 13:07:06.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:07:06.183: INFO: namespace container-runtime-4252 deletion completed in 6.148983614s

• [SLOW TEST:15.663 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
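
The "Expected: &{} to match ... Termination Message:  --" line above is the assertion that the message is empty: with FallbackToLogsOnError the kubelet only copies container logs into the termination message when the container fails, so a container that exits 0 and writes nothing to /dev/termination-log ends with an empty message. A minimal sketch, assuming busybox and illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-fallback
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["true"]          # exits 0, writes no termination log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the pod succeeds, the terminated message should print as empty:
kubectl get pod termmsg-fallback \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'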
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:07:06.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:07:06.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4320" for this suite.
Jan  1 13:07:12.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:07:12.513: INFO: namespace services-4320 deletion completed in 6.133111712s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.328 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
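
This spec runs no pods at all (note the empty [It] above): it only asserts that the cluster exposes the API server through the default "kubernetes" Service on the https port. The equivalent manual check, with typical values shown in the comment rather than values from this run:

kubectl get service kubernetes -n default
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   ...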
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:07:12.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:07:12.791: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707" in namespace "downward-api-3266" to be "success or failure"
Jan  1 13:07:12.806: INFO: Pod "downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707": Phase="Pending", Reason="", readiness=false. Elapsed: 13.978636ms
Jan  1 13:07:14.816: INFO: Pod "downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02412406s
Jan  1 13:07:16.824: INFO: Pod "downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03272983s
Jan  1 13:07:18.845: INFO: Pod "downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053020357s
Jan  1 13:07:20.859: INFO: Pod "downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067824681s
Jan  1 13:07:22.870: INFO: Pod "downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07819419s
Jan  1 13:07:24.889: INFO: Pod "downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.097253317s
STEP: Saw pod success
Jan  1 13:07:24.889: INFO: Pod "downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707" satisfied condition "success or failure"
Jan  1 13:07:24.912: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707 container client-container: 
STEP: delete the pod
Jan  1 13:07:25.926: INFO: Waiting for pod downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707 to disappear
Jan  1 13:07:25.936: INFO: Pod downwardapi-volume-a9ac6f35-9921-4063-b94d-d07ec89c3707 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:07:25.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3266" for this suite.
Jan  1 13:07:32.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:07:32.398: INFO: namespace downward-api-3266 deletion completed in 6.454987675s

• [SLOW TEST:19.884 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
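
The interesting wrinkle in this spec is the fallback rule: a downwardAPI resourceFieldRef for limits.cpu reports the node's allocatable CPU when the container sets no CPU limit. A minimal sketch with illustrative names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-cpu-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here: the file falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF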
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:07:32.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:07:32.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5" in namespace "downward-api-2443" to be "success or failure"
Jan  1 13:07:32.504: INFO: Pod "downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.997237ms
Jan  1 13:07:34.524: INFO: Pod "downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023374592s
Jan  1 13:07:36.536: INFO: Pod "downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036124272s
Jan  1 13:07:38.550: INFO: Pod "downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049399676s
Jan  1 13:07:40.565: INFO: Pod "downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064601328s
Jan  1 13:07:42.581: INFO: Pod "downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080728886s
STEP: Saw pod success
Jan  1 13:07:42.581: INFO: Pod "downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5" satisfied condition "success or failure"
Jan  1 13:07:42.589: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5 container client-container: 
STEP: delete the pod
Jan  1 13:07:42.680: INFO: Waiting for pod downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5 to disappear
Jan  1 13:07:42.734: INFO: Pod downwardapi-volume-f3ec7215-3e3b-4aba-9c90-c5e39bb318b5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:07:42.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2443" for this suite.
Jan  1 13:07:48.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:07:48.977: INFO: namespace downward-api-2443 deletion completed in 6.234925485s

• [SLOW TEST:16.578 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
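
The memory-request spec is the same downwardAPI-volume pattern as the CPU one above; only the field reference and an explicit request change. With the default divisor of "1", memory is reported in bytes. Sketch (same hedges as above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-mem-request
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi           # the value the projected file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
kubectl logs dapi-mem-request  # expect 33554432 (32Mi in bytes)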
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:07:48.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  1 13:07:49.200: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2337,SelfLink:/api/v1/namespaces/watch-2337/configmaps/e2e-watch-test-watch-closed,UID:a40a0dda-3d7a-42aa-8e4c-358ce414ec26,ResourceVersion:18893468,Generation:0,CreationTimestamp:2020-01-01 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 13:07:49.202: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2337,SelfLink:/api/v1/namespaces/watch-2337/configmaps/e2e-watch-test-watch-closed,UID:a40a0dda-3d7a-42aa-8e4c-358ce414ec26,ResourceVersion:18893469,Generation:0,CreationTimestamp:2020-01-01 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  1 13:07:49.233: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2337,SelfLink:/api/v1/namespaces/watch-2337/configmaps/e2e-watch-test-watch-closed,UID:a40a0dda-3d7a-42aa-8e4c-358ce414ec26,ResourceVersion:18893470,Generation:0,CreationTimestamp:2020-01-01 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 13:07:49.234: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2337,SelfLink:/api/v1/namespaces/watch-2337/configmaps/e2e-watch-test-watch-closed,UID:a40a0dda-3d7a-42aa-8e4c-358ce414ec26,ResourceVersion:18893471,Generation:0,CreationTimestamp:2020-01-01 13:07:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:07:49.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2337" for this suite.
Jan  1 13:07:55.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:07:55.348: INFO: namespace watch-2337 deletion completed in 6.098299349s

• [SLOW TEST:6.371 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
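
The mechanism under test is plain API semantics: a watch can be re-established from the last resourceVersion a previous watch delivered, and the server replays every event after it. Using the namespace and versions from the events above (18893468-71), the raw equivalent is:

kubectl proxy --port=8001 &
# the first watch observed up to ResourceVersion 18893469, then was closed;
# resuming from that version replays the MODIFIED (mutation: 2) and DELETED events:
curl -s "http://127.0.0.1:8001/api/v1/namespaces/watch-2337/configmaps?watch=true&resourceVersion=18893469"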
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:07:55.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5139
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  1 13:07:55.451: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  1 13:08:39.805: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5139 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 13:08:39.805: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 13:08:40.371: INFO: Waiting for endpoints: map[]
Jan  1 13:08:40.385: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5139 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 13:08:40.385: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 13:08:40.733: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:08:40.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5139" for this suite.
Jan  1 13:09:02.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:09:02.951: INFO: namespace pod-network-test-5139 deletion completed in 22.20594238s

• [SLOW TEST:67.603 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
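
The two ExecWithOptions lines above are the whole test: a host-network helper pod asks one netserver pod's HTTP /dial endpoint to send a UDP probe to the other pod and report what answered. Re-running one probe by hand (pod names, namespace, and IPs are from this run; the response shape is inferred and illustrative):

kubectl exec -n pod-network-test-5139 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
# a non-empty reply such as {"responses":["netserver-0"]} means the UDP packet
# reached the peer pod and its hostName came back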
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:09:02.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-01f2ab84-40c4-4224-8861-1b992816b3d3 in namespace container-probe-3130
Jan  1 13:09:11.119: INFO: Started pod busybox-01f2ab84-40c4-4224-8861-1b992816b3d3 in namespace container-probe-3130
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 13:09:11.121: INFO: Initial restart count of pod busybox-01f2ab84-40c4-4224-8861-1b992816b3d3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:13:11.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3130" for this suite.
Jan  1 13:13:17.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:13:17.791: INFO: namespace container-probe-3130 deletion completed in 6.205568346s

• [SLOW TEST:254.839 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
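
The four-minute gap above (13:09:11 to 13:13:11) is the point of this spec: the kubelet keeps probing for the whole window and restartCount must stay 0. A minimal equivalent pod, using the classic touch-then-sleep pattern (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]  # always succeeds: the file persists
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# a few minutes later, RESTARTS should still read 0:
kubectl get pod busybox-liveness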
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:13:17.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:13:18.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c" in namespace "projected-1528" to be "success or failure"
Jan  1 13:13:18.012: INFO: Pod "downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.971479ms
Jan  1 13:13:20.023: INFO: Pod "downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019458738s
Jan  1 13:13:22.030: INFO: Pod "downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026677816s
Jan  1 13:13:24.048: INFO: Pod "downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044690411s
Jan  1 13:13:26.055: INFO: Pod "downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05140478s
Jan  1 13:13:28.061: INFO: Pod "downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057410276s
STEP: Saw pod success
Jan  1 13:13:28.061: INFO: Pod "downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c" satisfied condition "success or failure"
Jan  1 13:13:28.064: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c container client-container: 
STEP: delete the pod
Jan  1 13:13:28.201: INFO: Waiting for pod downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c to disappear
Jan  1 13:13:28.214: INFO: Pod downwardapi-volume-a91880ea-c8be-4788-95eb-00f32d64d83c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:13:28.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1528" for this suite.
Jan  1 13:13:34.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:13:34.412: INFO: namespace projected-1528 deletion completed in 6.184398273s

• [SLOW TEST:16.619 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
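
DefaultMode here is the file-permission knob on the projected volume; the suite's client-container stats the projected file and compares modes. A sketch with 0400 chosen arbitrarily and all names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400        # every file in the projection gets mode 0400
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs dapi-defaultmode  # expect: 400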
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:13:34.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-9b8346c6-9e41-409e-bf12-9de6f70e9a73
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:13:34.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1712" for this suite.
Jan  1 13:13:40.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:13:40.718: INFO: namespace secrets-1712 deletion completed in 6.184987651s

• [SLOW TEST:6.305 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
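
This is a negative test: validation happens at the apiserver, before anything is stored, which is why no pod ever runs above. Attempting it directly (the error wording in the comment is approximate and may differ by version):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey
stringData:
  "": should-not-work
EOF
# rejected with something like:
#   Secret "secret-emptykey" is invalid: data[]: Invalid value: "":
#   a valid config key must consist of alphanumeric characters, '-', '_' or '.'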
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:13:40.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan  1 13:13:40.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8734'
Jan  1 13:13:42.992: INFO: stderr: ""
Jan  1 13:13:42.992: INFO: stdout: "pod/pause created\n"
Jan  1 13:13:42.992: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  1 13:13:42.992: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8734" to be "running and ready"
Jan  1 13:13:43.019: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 26.404503ms
Jan  1 13:13:45.028: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035943275s
Jan  1 13:13:47.035: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04278645s
Jan  1 13:13:49.046: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053559973s
Jan  1 13:13:51.056: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063859534s
Jan  1 13:13:53.063: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.070069318s
Jan  1 13:13:53.063: INFO: Pod "pause" satisfied condition "running and ready"
Jan  1 13:13:53.063: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  1 13:13:53.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8734'
Jan  1 13:13:53.259: INFO: stderr: ""
Jan  1 13:13:53.259: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  1 13:13:53.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8734'
Jan  1 13:13:53.673: INFO: stderr: ""
Jan  1 13:13:53.674: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  1 13:13:53.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8734'
Jan  1 13:13:53.836: INFO: stderr: ""
Jan  1 13:13:53.836: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  1 13:13:53.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8734'
Jan  1 13:13:53.989: INFO: stderr: ""
Jan  1 13:13:53.989: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan  1 13:13:53.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8734'
Jan  1 13:13:54.152: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 13:13:54.152: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  1 13:13:54.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8734'
Jan  1 13:13:54.297: INFO: stderr: "No resources found.\n"
Jan  1 13:13:54.297: INFO: stdout: ""
Jan  1 13:13:54.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8734 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  1 13:13:54.420: INFO: stderr: ""
Jan  1 13:13:54.420: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:13:54.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8734" for this suite.
Jan  1 13:14:00.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:14:00.660: INFO: namespace kubectl-8734 deletion completed in 6.225497688s

• [SLOW TEST:19.942 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:14:00.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:14:08.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8291" for this suite.
Jan  1 13:14:15.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:14:15.165: INFO: namespace emptydir-wrapper-8291 deletion completed in 6.233652319s

• [SLOW TEST:14.504 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
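
"Wrapper volumes" are the types the kubelet materializes via an intermediate emptyDir (secret, configMap, and friends); the cleanup STEPs above imply the spec mounts a secret and a configMap side by side in one pod and expects no mount conflict. Roughly, under those assumptions and with illustrative names:

kubectl create secret generic wrapper-secret --from-literal=k=v
kubectl create configmap wrapper-cm --from-literal=k=v
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-pod
spec:
  containers:
  - name: c
    image: busybox
    command: ["sleep", "600"]
    volumeMounts:
    - name: s
      mountPath: /etc/s
    - name: m
      mountPath: /etc/m
  volumes:
  - name: s
    secret:
      secretName: wrapper-secret
  - name: m
    configMap:
      name: wrapper-cm
EOF
# both volumes should mount cleanly; the suite then deletes secret, configmap, pod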
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:14:15.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-7e7fe5c8-75f5-493d-b0b3-4fa43d4273c4
STEP: Creating a pod to test consume configMaps
Jan  1 13:14:15.288: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47" in namespace "projected-3940" to be "success or failure"
Jan  1 13:14:15.315: INFO: Pod "pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47": Phase="Pending", Reason="", readiness=false. Elapsed: 26.734724ms
Jan  1 13:14:17.320: INFO: Pod "pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032360399s
Jan  1 13:14:19.348: INFO: Pod "pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059760999s
Jan  1 13:14:21.378: INFO: Pod "pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089826171s
Jan  1 13:14:23.387: INFO: Pod "pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098838345s
Jan  1 13:14:25.394: INFO: Pod "pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47": Phase="Pending", Reason="", readiness=false. Elapsed: 10.106307196s
Jan  1 13:14:27.416: INFO: Pod "pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.12742273s
STEP: Saw pod success
Jan  1 13:14:27.416: INFO: Pod "pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47" satisfied condition "success or failure"
Jan  1 13:14:27.424: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 13:14:27.538: INFO: Waiting for pod pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47 to disappear
Jan  1 13:14:27.545: INFO: Pod pod-projected-configmaps-da6457aa-a9ca-478d-81be-7b15e1c72b47 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:14:27.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3940" for this suite.
Jan  1 13:14:33.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:14:33.759: INFO: namespace projected-3940 deletion completed in 6.206468523s

• [SLOW TEST:18.593 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
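
"With mappings" means the items list remaps a ConfigMap key to a chosen path inside the volume instead of the default key-named file. Sketch (names and image illustrative):

kubectl create configmap map-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-mapped
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: map-cm
          items:
          - key: data-1
            path: path/to/data-1   # the mapping under test
EOF
kubectl logs projected-cm-mapped   # expect: value-1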
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:14:33.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fb5f220f-e792-4bb9-915f-93604c37d5d9
STEP: Creating a pod to test consume secrets
Jan  1 13:14:34.109: INFO: Waiting up to 5m0s for pod "pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88" in namespace "secrets-8054" to be "success or failure"
Jan  1 13:14:34.119: INFO: Pod "pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88": Phase="Pending", Reason="", readiness=false. Elapsed: 9.693156ms
Jan  1 13:14:36.129: INFO: Pod "pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019564222s
Jan  1 13:14:38.136: INFO: Pod "pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026876859s
Jan  1 13:14:40.142: INFO: Pod "pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032156938s
Jan  1 13:14:42.156: INFO: Pod "pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046883904s
Jan  1 13:14:44.167: INFO: Pod "pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057850863s
STEP: Saw pod success
Jan  1 13:14:44.168: INFO: Pod "pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88" satisfied condition "success or failure"
Jan  1 13:14:44.189: INFO: Trying to get logs from node iruya-node pod pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88 container secret-volume-test: 
STEP: delete the pod
Jan  1 13:14:44.346: INFO: Waiting for pod pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88 to disappear
Jan  1 13:14:44.357: INFO: Pod pod-secrets-5dc558a3-d1e8-46c7-9bbd-001592afbf88 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:14:44.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8054" for this suite.
Jan  1 13:14:50.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:14:50.658: INFO: namespace secrets-8054 deletion completed in 6.290569057s
STEP: Destroying namespace "secret-namespace-5094" for this suite.
Jan  1 13:14:56.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:14:56.897: INFO: namespace secret-namespace-5094 deletion completed in 6.238783626s

• [SLOW TEST:23.138 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
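
The second namespace destroyed above (secret-namespace-5094) held a decoy secret with the same name; the spec demonstrates that a volume mount resolves the secret in the pod's own namespace only. By hand, with illustrative names and values:

kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl create secret generic same-name --from-literal=who=ns-a -n ns-a
kubectl create secret generic same-name --from-literal=who=ns-b -n ns-b
# a pod created in ns-a that mounts secret "same-name" reads who=ns-a;
# the identically named secret in ns-b is irrelevant to the mount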
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:14:56.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  1 13:14:57.077: INFO: Waiting up to 5m0s for pod "pod-0b73c52e-2e12-49cb-8079-5fa295692d77" in namespace "emptydir-6234" to be "success or failure"
Jan  1 13:14:57.083: INFO: Pod "pod-0b73c52e-2e12-49cb-8079-5fa295692d77": Phase="Pending", Reason="", readiness=false. Elapsed: 5.698474ms
Jan  1 13:14:59.097: INFO: Pod "pod-0b73c52e-2e12-49cb-8079-5fa295692d77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019833659s
Jan  1 13:15:01.111: INFO: Pod "pod-0b73c52e-2e12-49cb-8079-5fa295692d77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033682263s
Jan  1 13:15:03.118: INFO: Pod "pod-0b73c52e-2e12-49cb-8079-5fa295692d77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041190272s
Jan  1 13:15:05.132: INFO: Pod "pod-0b73c52e-2e12-49cb-8079-5fa295692d77": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054389114s
Jan  1 13:15:07.140: INFO: Pod "pod-0b73c52e-2e12-49cb-8079-5fa295692d77": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06275617s
Jan  1 13:15:09.150: INFO: Pod "pod-0b73c52e-2e12-49cb-8079-5fa295692d77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.07281785s
STEP: Saw pod success
Jan  1 13:15:09.150: INFO: Pod "pod-0b73c52e-2e12-49cb-8079-5fa295692d77" satisfied condition "success or failure"
Jan  1 13:15:09.155: INFO: Trying to get logs from node iruya-node pod pod-0b73c52e-2e12-49cb-8079-5fa295692d77 container test-container: 
STEP: delete the pod
Jan  1 13:15:09.460: INFO: Waiting for pod pod-0b73c52e-2e12-49cb-8079-5fa295692d77 to disappear
Jan  1 13:15:09.472: INFO: Pod pod-0b73c52e-2e12-49cb-8079-5fa295692d77 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:15:09.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6234" for this suite.
Jan  1 13:15:15.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:15:15.673: INFO: namespace emptydir-6234 deletion completed in 6.191502956s

• [SLOW TEST:18.776 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:15:15.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-e5ed4eea-2a1f-4bc8-8a6d-22474e67dac9
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e5ed4eea-2a1f-4bc8-8a6d-22474e67dac9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:16:55.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1487" for this suite.
Jan  1 13:17:18.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:17:18.242: INFO: namespace configmap-1487 deletion completed in 22.256107188s

• [SLOW TEST:122.567 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
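
The long runtime here is inherent to the spec: the pod sits mounted until the kubelet's sync loop refreshes the projected ConfigMap data. The behavior by hand (illustrative names; propagation typically lands within a minute):

kubectl create configmap live-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watcher
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: live-cm
EOF
kubectl patch configmap live-cm -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f cm-watcher   # output flips from value-1 to value-2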
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:17:18.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:17:18.359: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  1 13:17:18.394: INFO: Number of nodes with available pods: 0
Jan  1 13:17:18.394: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:19.873: INFO: Number of nodes with available pods: 0
Jan  1 13:17:19.873: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:20.476: INFO: Number of nodes with available pods: 0
Jan  1 13:17:20.477: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:21.427: INFO: Number of nodes with available pods: 0
Jan  1 13:17:21.427: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:22.414: INFO: Number of nodes with available pods: 0
Jan  1 13:17:22.414: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:23.403: INFO: Number of nodes with available pods: 0
Jan  1 13:17:23.403: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:24.703: INFO: Number of nodes with available pods: 0
Jan  1 13:17:24.704: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:25.510: INFO: Number of nodes with available pods: 0
Jan  1 13:17:25.510: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:26.413: INFO: Number of nodes with available pods: 0
Jan  1 13:17:26.413: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:27.465: INFO: Number of nodes with available pods: 1
Jan  1 13:17:27.465: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:28.433: INFO: Number of nodes with available pods: 1
Jan  1 13:17:28.433: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:29.411: INFO: Number of nodes with available pods: 2
Jan  1 13:17:29.411: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  1 13:17:29.475: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:29.475: INFO: Wrong image for pod: daemon-set-skmr6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:30.497: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:30.497: INFO: Wrong image for pod: daemon-set-skmr6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:31.497: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:31.498: INFO: Wrong image for pod: daemon-set-skmr6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:32.494: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:32.494: INFO: Wrong image for pod: daemon-set-skmr6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:33.493: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:33.493: INFO: Wrong image for pod: daemon-set-skmr6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:33.493: INFO: Pod daemon-set-skmr6 is not available
Jan  1 13:17:34.573: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:34.573: INFO: Pod daemon-set-gx9dg is not available
Jan  1 13:17:36.191: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:36.191: INFO: Pod daemon-set-gx9dg is not available
Jan  1 13:17:36.561: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:36.561: INFO: Pod daemon-set-gx9dg is not available
Jan  1 13:17:37.493: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:37.493: INFO: Pod daemon-set-gx9dg is not available
Jan  1 13:17:38.495: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:38.495: INFO: Pod daemon-set-gx9dg is not available
Jan  1 13:17:39.794: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:39.795: INFO: Pod daemon-set-gx9dg is not available
Jan  1 13:17:40.495: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:40.495: INFO: Pod daemon-set-gx9dg is not available
Jan  1 13:17:41.488: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:41.488: INFO: Pod daemon-set-gx9dg is not available
Jan  1 13:17:42.495: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:42.495: INFO: Pod daemon-set-gx9dg is not available
Jan  1 13:17:43.487: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:44.497: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:45.493: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:46.497: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:47.491: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:47.491: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:48.496: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:48.496: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:49.493: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:49.493: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:50.497: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:50.497: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:51.495: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:51.496: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:52.492: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:52.493: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:53.490: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:53.490: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:54.498: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:54.499: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:55.491: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:55.491: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:56.496: INFO: Wrong image for pod: daemon-set-br6c7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  1 13:17:56.496: INFO: Pod daemon-set-br6c7 is not available
Jan  1 13:17:57.491: INFO: Pod daemon-set-8mtn7 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  1 13:17:57.508: INFO: Number of nodes with available pods: 1
Jan  1 13:17:57.508: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:58.530: INFO: Number of nodes with available pods: 1
Jan  1 13:17:58.530: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:17:59.528: INFO: Number of nodes with available pods: 1
Jan  1 13:17:59.528: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:18:00.524: INFO: Number of nodes with available pods: 1
Jan  1 13:18:00.524: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:18:01.527: INFO: Number of nodes with available pods: 1
Jan  1 13:18:01.527: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:18:02.533: INFO: Number of nodes with available pods: 1
Jan  1 13:18:02.533: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:18:03.529: INFO: Number of nodes with available pods: 2
Jan  1 13:18:03.529: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3009, will wait for the garbage collector to delete the pods
Jan  1 13:18:03.620: INFO: Deleting DaemonSet.extensions daemon-set took: 14.731584ms
Jan  1 13:18:03.921: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.079794ms
Jan  1 13:18:16.641: INFO: Number of nodes with available pods: 0
Jan  1 13:18:16.642: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 13:18:16.660: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3009/daemonsets","resourceVersion":"18894671"},"items":null}

Jan  1 13:18:16.667: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3009/pods","resourceVersion":"18894671"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:18:16.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3009" for this suite.
Jan  1 13:18:22.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:18:22.912: INFO: namespace daemonsets-3009 deletion completed in 6.223417922s

• [SLOW TEST:64.670 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
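
Editor's sketch: the rolling update traced above (nginx pods replaced one at a time by the redis image, with at most one pod unavailable per step) is driven entirely by the DaemonSet's updateStrategy; RollingUpdate's maxUnavailable defaults to 1, which is why the log never shows more than one pod NotAvailable at once. A minimal Go sketch of the same flow, assuming client-go v0.18+ (older releases, including the v1.15 vintage of this run, omit the context and options arguments); the label key "name" and container name "app" are illustrative assumptions:

    package main

    import (
    	"context"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	labels := map[string]string{"name": "daemon-set"}
    	ds := &appsv1.DaemonSet{
    		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
    		Spec: appsv1.DaemonSetSpec{
    			Selector: &metav1.LabelSelector{MatchLabels: labels},
    			// RollingUpdate replaces pods node by node; maxUnavailable
    			// defaults to 1, matching the one-at-a-time churn in the log.
    			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
    				Type: appsv1.RollingUpdateDaemonSetStrategyType,
    			},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{Containers: []corev1.Container{{
    					Name:  "app",
    					Image: "docker.io/library/nginx:1.14-alpine",
    				}}},
    			},
    		},
    	}

    	created, err := cs.AppsV1().DaemonSets("daemonsets-3009").Create(ctx, ds, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}

    	// Changing the pod template image is the whole trigger: the controller
    	// deletes and recreates one daemon pod at a time until every node runs
    	// the new image ("Wrong image for pod ..." stops being logged).
    	created.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
    	if _, err := cs.AppsV1().DaemonSets("daemonsets-3009").Update(ctx, created, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }
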
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:18:22.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan  1 13:18:22.988: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix989836195/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:18:23.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7538" for this suite.
Jan  1 13:18:29.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:18:29.291: INFO: namespace kubectl-7538 deletion completed in 6.180029445s

• [SLOW TEST:6.378 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
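
Editor's sketch: `kubectl proxy --unix-socket=PATH` serves the full API on a local Unix socket instead of a TCP port, and the test simply fetches /api/ through it. A stdlib-only Go client sketch; the socket path is the one printed in the log, and the `unix` host in the URL is a placeholder since the custom dialer ignores it:

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"net"
    	"net/http"
    )

    func main() {
    	sock := "/tmp/kubectl-proxy-unix989836195/test" // socket path from the log

    	client := &http.Client{
    		Transport: &http.Transport{
    			// Dial the Unix socket no matter what host the URL names.
    			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				return net.Dial("unix", sock)
    			},
    		},
    	}

    	resp, err := client.Get("http://unix/api/") // "unix" host is ignored by the dialer
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(body)) // expect the APIVersions document the test retrieves
    }
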
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:18:29.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  1 13:18:29.397: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  1 13:18:29.408: INFO: Waiting for terminating namespaces to be deleted...
Jan  1 13:18:29.411: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  1 13:18:29.439: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  1 13:18:29.440: INFO: 	Container weave ready: true, restart count 0
Jan  1 13:18:29.440: INFO: 	Container weave-npc ready: true, restart count 0
Jan  1 13:18:29.440: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  1 13:18:29.440: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 13:18:29.440: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  1 13:18:29.454: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  1 13:18:29.454: INFO: 	Container kube-controller-manager ready: true, restart count 14
Jan  1 13:18:29.454: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  1 13:18:29.454: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 13:18:29.454: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  1 13:18:29.454: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  1 13:18:29.454: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  1 13:18:29.454: INFO: 	Container kube-scheduler ready: true, restart count 10
Jan  1 13:18:29.454: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  1 13:18:29.454: INFO: 	Container coredns ready: true, restart count 0
Jan  1 13:18:29.454: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  1 13:18:29.454: INFO: 	Container etcd ready: true, restart count 0
Jan  1 13:18:29.454: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  1 13:18:29.454: INFO: 	Container weave ready: true, restart count 0
Jan  1 13:18:29.454: INFO: 	Container weave-npc ready: true, restart count 0
Jan  1 13:18:29.454: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  1 13:18:29.454: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan  1 13:18:29.584: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  1 13:18:29.584: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  1 13:18:29.584: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  1 13:18:29.584: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan  1 13:18:29.584: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan  1 13:18:29.584: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  1 13:18:29.584: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan  1 13:18:29.584: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  1 13:18:29.584: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan  1 13:18:29.584: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1f837f10-d142-49de-b68c-84848d8ab3df.15e5c5c890e84b63], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2820/filler-pod-1f837f10-d142-49de-b68c-84848d8ab3df to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1f837f10-d142-49de-b68c-84848d8ab3df.15e5c5c9d030eda8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1f837f10-d142-49de-b68c-84848d8ab3df.15e5c5cab67ea25d], Reason = [Created], Message = [Created container filler-pod-1f837f10-d142-49de-b68c-84848d8ab3df]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1f837f10-d142-49de-b68c-84848d8ab3df.15e5c5cae4118ab0], Reason = [Started], Message = [Started container filler-pod-1f837f10-d142-49de-b68c-84848d8ab3df]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b9c81c86-dc46-4f37-a66c-ff4b43acb030.15e5c5c88ff61003], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2820/filler-pod-b9c81c86-dc46-4f37-a66c-ff4b43acb030 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b9c81c86-dc46-4f37-a66c-ff4b43acb030.15e5c5c9d58c0cd1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b9c81c86-dc46-4f37-a66c-ff4b43acb030.15e5c5cae3363251], Reason = [Created], Message = [Created container filler-pod-b9c81c86-dc46-4f37-a66c-ff4b43acb030]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b9c81c86-dc46-4f37-a66c-ff4b43acb030.15e5c5cb034d828d], Reason = [Started], Message = [Started container filler-pod-b9c81c86-dc46-4f37-a66c-ff4b43acb030]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e5c5cb5eb33049], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:18:42.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2820" for this suite.
Jan  1 13:18:51.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:18:51.164: INFO: namespace sched-pred-2820 deletion completed in 8.219202753s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.871 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
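
Editor's sketch: the predicate being validated is plain arithmetic. The scheduler sums the CPU requests already placed on each node (the per-pod figures logged above), subtracts them from the node's allocatable CPU, and rejects any pod whose request exceeds the remainder on every node, which yields the "0/2 nodes are available: 2 Insufficient cpu." event. A hedged sketch of the final, unschedulable pod, assuming client-go v0.18+; the 4-core request is an arbitrary value assumed to exceed what the filler pods left free:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
    		Spec: corev1.PodSpec{Containers: []corev1.Container{{
    			Name:  "pause",
    			Image: "k8s.gcr.io/pause:3.1",
    			Resources: corev1.ResourceRequirements{
    				// A request no node can satisfy once the filler pods have
    				// consumed the headroom; creation succeeds, scheduling fails.
    				Requests: corev1.ResourceList{
    					corev1.ResourceCPU: resource.MustParse("4"),
    				},
    			},
    		}}},
    	}
    	if _, err := cs.CoreV1().Pods("sched-pred-2820").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    	// The pod stays Pending; describing it shows the FailedScheduling
    	// event with the "Insufficient cpu" reason seen in the log.
    }
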
SSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:18:51.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan  1 13:18:52.681: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan  1 13:18:53.249: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan  1 13:18:55.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:18:57.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:18:59.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:19:01.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:19:03.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713481533, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:19:10.629: INFO: Waited 5.062651052s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:19:11.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5852" for this suite.
Jan  1 13:19:17.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:19:17.487: INFO: namespace aggregator-5852 deletion completed in 6.232534744s

• [SLOW TEST:26.322 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
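
Editor's sketch: registering an extension API server with the aggregator comes down to creating an APIService object that points a group/version at an in-cluster Service; the aggregator proxies matching requests there once the backing Deployment is Available (the status dumps above show the test waiting for exactly that). A sketch using the kube-aggregator client, assuming client-go v0.18+; the wardle group/version, the Service name, and the CA bundle handling are assumptions modelled on the sample API server, not values from this log:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/tools/clientcmd"
    	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    	aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	ac := aggregator.NewForConfigOrDie(cfg)

    	var caPEM []byte // CA that signed the extension server's serving cert; omitted here

    	apiService := &apiregistrationv1.APIService{
    		// The object name must be "<version>.<group>".
    		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
    		Spec: apiregistrationv1.APIServiceSpec{
    			Group:    "wardle.k8s.io",
    			Version:  "v1alpha1",
    			CABundle: caPEM,
    			// Requests for this group/version are proxied to this Service.
    			Service: &apiregistrationv1.ServiceReference{
    				Namespace: "aggregator-5852",
    				Name:      "sample-api",
    			},
    			GroupPriorityMinimum: 2000,
    			VersionPriority:      200,
    		},
    	}
    	if _, err := ac.ApiregistrationV1().APIServices().Create(context.Background(), apiService, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
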
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:19:17.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0101 13:20:01.670802       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 13:20:01.670: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:20:01.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9257" for this suite.
Jan  1 13:20:10.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:20:12.646: INFO: namespace gc-9257 deletion completed in 10.968316471s

• [SLOW TEST:55.158 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
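
Editor's sketch: the "delete options" the test name refers to are the deletion propagation policy. With Orphan propagation the garbage collector strips the ownerReferences from the RC's pods instead of deleting them, which is what the 30-second observation window above verifies. A minimal sketch, assuming client-go v0.18+; the RC name is an assumption, since the log never prints it:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Orphan propagation: delete the controller, keep its pods. The garbage
    	// collector only clears the pods' ownerReferences instead of cascading.
    	orphan := metav1.DeletePropagationOrphan
    	if err := cs.CoreV1().ReplicationControllers("gc-9257").Delete(
    		context.Background(),
    		"simpletest.rc", // RC name is an assumption, not taken from the log
    		metav1.DeleteOptions{PropagationPolicy: &orphan},
    	); err != nil {
    		panic(err)
    	}
    }
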
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:20:12.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  1 13:20:39.380: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:39.403: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:41.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:41.416: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:43.403: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:43.417: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:45.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:45.422: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:47.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:47.441: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:49.403: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:49.412: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:51.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:51.413: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:53.403: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:53.412: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:55.403: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:55.413: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:57.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:57.418: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:20:59.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:20:59.423: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:21:01.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:21:01.413: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:21:03.403: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:21:03.414: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:21:05.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:21:05.413: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  1 13:21:07.404: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  1 13:21:07.417: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:21:07.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9938" for this suite.
Jan  1 13:21:29.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:21:29.603: INFO: namespace container-lifecycle-hook-9938 deletion completed in 22.137336451s

• [SLOW TEST:76.957 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
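
Editor's sketch: a preStop hook runs inside the container after deletion is requested but before the container is stopped, which is why the pod above lingers through so many 2-second poll cycles before disappearing. A hedged sketch of such a pod, assuming client-go v0.18+; the image, the handler URL the hook calls, and the type name corev1.LifecycleHandler (it was corev1.Handler before k8s.io/api v0.23) are assumptions, not values from this log:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
    		Spec: corev1.PodSpec{Containers: []corev1.Container{{
    			Name:  "pod-with-prestop-exec-hook",
    			Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
    			Lifecycle: &corev1.Lifecycle{
    				// Runs in the container after deletion is requested and
    				// before SIGTERM; the test's handler pod records the call
    				// ("check prestop hook" above).
    				PreStop: &corev1.LifecycleHandler{
    					Exec: &corev1.ExecAction{
    						Command: []string{"sh", "-c", "wget -qO- http://handler.example:8080/echo?msg=prestop"},
    					},
    				},
    			},
    		}}},
    	}
    	if _, err := cs.CoreV1().Pods("container-lifecycle-hook-9938").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
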
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:21:29.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  1 13:21:29.738: INFO: Number of nodes with available pods: 0
Jan  1 13:21:29.738: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:30.891: INFO: Number of nodes with available pods: 0
Jan  1 13:21:30.891: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:31.803: INFO: Number of nodes with available pods: 0
Jan  1 13:21:31.803: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:32.759: INFO: Number of nodes with available pods: 0
Jan  1 13:21:32.759: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:33.771: INFO: Number of nodes with available pods: 0
Jan  1 13:21:33.771: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:35.222: INFO: Number of nodes with available pods: 0
Jan  1 13:21:35.222: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:36.520: INFO: Number of nodes with available pods: 0
Jan  1 13:21:36.521: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:36.790: INFO: Number of nodes with available pods: 0
Jan  1 13:21:36.791: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:37.756: INFO: Number of nodes with available pods: 0
Jan  1 13:21:37.756: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:38.749: INFO: Number of nodes with available pods: 1
Jan  1 13:21:38.749: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:39.763: INFO: Number of nodes with available pods: 1
Jan  1 13:21:39.763: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:40.755: INFO: Number of nodes with available pods: 2
Jan  1 13:21:40.755: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  1 13:21:40.916: INFO: Number of nodes with available pods: 1
Jan  1 13:21:40.916: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:41.939: INFO: Number of nodes with available pods: 1
Jan  1 13:21:41.939: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:42.952: INFO: Number of nodes with available pods: 1
Jan  1 13:21:42.952: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:43.941: INFO: Number of nodes with available pods: 1
Jan  1 13:21:43.941: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:44.930: INFO: Number of nodes with available pods: 1
Jan  1 13:21:44.930: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:45.934: INFO: Number of nodes with available pods: 1
Jan  1 13:21:45.934: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:46.987: INFO: Number of nodes with available pods: 1
Jan  1 13:21:46.987: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:47.935: INFO: Number of nodes with available pods: 1
Jan  1 13:21:47.935: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:48.939: INFO: Number of nodes with available pods: 1
Jan  1 13:21:48.940: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:49.933: INFO: Number of nodes with available pods: 1
Jan  1 13:21:49.933: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:50.931: INFO: Number of nodes with available pods: 1
Jan  1 13:21:50.932: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:51.963: INFO: Number of nodes with available pods: 1
Jan  1 13:21:51.963: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:52.928: INFO: Number of nodes with available pods: 1
Jan  1 13:21:52.928: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:53.934: INFO: Number of nodes with available pods: 1
Jan  1 13:21:53.934: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:54.934: INFO: Number of nodes with available pods: 1
Jan  1 13:21:54.934: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:55.930: INFO: Number of nodes with available pods: 1
Jan  1 13:21:55.930: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:56.933: INFO: Number of nodes with available pods: 1
Jan  1 13:21:56.933: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:57.931: INFO: Number of nodes with available pods: 1
Jan  1 13:21:57.931: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:58.929: INFO: Number of nodes with available pods: 1
Jan  1 13:21:58.929: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:21:59.928: INFO: Number of nodes with available pods: 1
Jan  1 13:21:59.928: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:22:00.934: INFO: Number of nodes with available pods: 1
Jan  1 13:22:00.935: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:22:01.941: INFO: Number of nodes with available pods: 1
Jan  1 13:22:01.942: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:22:02.930: INFO: Number of nodes with available pods: 1
Jan  1 13:22:02.930: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:22:03.942: INFO: Number of nodes with available pods: 1
Jan  1 13:22:03.942: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:22:04.933: INFO: Number of nodes with available pods: 1
Jan  1 13:22:04.933: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:22:05.931: INFO: Number of nodes with available pods: 2
Jan  1 13:22:05.931: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1276, will wait for the garbage collector to delete the pods
Jan  1 13:22:06.011: INFO: Deleting DaemonSet.extensions daemon-set took: 21.369207ms
Jan  1 13:22:07.412: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.400942829s
Jan  1 13:22:16.623: INFO: Number of nodes with available pods: 0
Jan  1 13:22:16.623: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 13:22:16.629: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1276/daemonsets","resourceVersion":"18895426"},"items":null}

Jan  1 13:22:16.632: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1276/pods","resourceVersion":"18895426"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:22:16.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1276" for this suite.
Jan  1 13:22:22.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:22:22.922: INFO: namespace daemonsets-1276 deletion completed in 6.272686728s

• [SLOW TEST:53.319 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
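
Editor's sketch: the "stop a daemon pod, check that the daemon pod is revived" step works because the DaemonSet controller reconciles on node coverage, not on a replica count: delete a daemon pod and the controller creates a replacement on the now-uncovered node. A sketch, assuming client-go v0.18+ and that these test pods carry the label name=daemon-set:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()
    	ns := "daemonsets-1276"

    	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "name=daemon-set"})
    	if err != nil || len(pods.Items) == 0 {
    		panic("no daemon pods found")
    	}

    	// Kill one daemon pod; the controller sees the node is no longer
    	// covered and schedules a replacement there, which the test then waits
    	// for ("Number of running nodes: 2, number of available pods: 2").
    	victim := pods.Items[0].Name
    	if err := cs.CoreV1().Pods(ns).Delete(ctx, victim, metav1.DeleteOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("deleted daemon pod", victim, "- the controller will revive it")
    }
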
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:22:22.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-39cc4b80-7f9f-4d36-89ca-1c719a28b696
STEP: Creating a pod to test consume configMaps
Jan  1 13:22:23.038: INFO: Waiting up to 5m0s for pod "pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad" in namespace "configmap-5021" to be "success or failure"
Jan  1 13:22:23.053: INFO: Pod "pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad": Phase="Pending", Reason="", readiness=false. Elapsed: 14.2351ms
Jan  1 13:22:25.060: INFO: Pod "pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021483051s
Jan  1 13:22:27.085: INFO: Pod "pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046215523s
Jan  1 13:22:29.093: INFO: Pod "pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054036584s
Jan  1 13:22:31.102: INFO: Pod "pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06305872s
Jan  1 13:22:33.111: INFO: Pod "pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072390399s
STEP: Saw pod success
Jan  1 13:22:33.111: INFO: Pod "pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad" satisfied condition "success or failure"
Jan  1 13:22:33.116: INFO: Trying to get logs from node iruya-node pod pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad container configmap-volume-test: 
STEP: delete the pod
Jan  1 13:22:33.244: INFO: Waiting for pod pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad to disappear
Jan  1 13:22:33.284: INFO: Pod pod-configmaps-00df0e8e-6740-4f15-b4ef-3d9e1d8f64ad no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:22:33.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5021" for this suite.
Jan  1 13:22:39.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:22:39.583: INFO: namespace configmap-5021 deletion completed in 6.208730428s

• [SLOW TEST:16.661 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
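
Editor's sketch: this test mounts a ConfigMap as a volume and reads it back from a container running as a non-root user; the container runs once and exits, hence the Pending-to-Succeeded progression against the "success or failure" condition above. A hedged sketch of the pod shape, assuming client-go v0.18+; the UID 1000, busybox image, mount path, and ConfigMap key name are illustrative choices, not values from the log:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	uid := int64(1000) // any non-root UID
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
    		Spec: corev1.PodSpec{
    			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
    			RestartPolicy:   corev1.RestartPolicyNever, // run once, end Succeeded
    			Volumes: []corev1.Volume{{
    				Name: "configmap-volume",
    				VolumeSource: corev1.VolumeSource{
    					ConfigMap: &corev1.ConfigMapVolumeSource{
    						LocalObjectReference: corev1.LocalObjectReference{
    							Name: "configmap-test-volume-39cc4b80-7f9f-4d36-89ca-1c719a28b696",
    						},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "configmap-volume-test",
    				Image:   "busybox",
    				Command: []string{"cat", "/etc/configmap-volume/data-1"}, // key name assumed
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "configmap-volume",
    					MountPath: "/etc/configmap-volume",
    				}},
    			}},
    		},
    	}
    	if _, err := cs.CoreV1().Pods("configmap-5021").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
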
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:22:39.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  1 13:22:39.653: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  1 13:22:39.661: INFO: Waiting for terminating namespaces to be deleted...
Jan  1 13:22:39.664: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  1 13:22:39.735: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  1 13:22:39.735: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 13:22:39.735: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  1 13:22:39.735: INFO: 	Container weave ready: true, restart count 0
Jan  1 13:22:39.735: INFO: 	Container weave-npc ready: true, restart count 0
Jan  1 13:22:39.735: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  1 13:22:39.754: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  1 13:22:39.754: INFO: 	Container kube-controller-manager ready: true, restart count 14
Jan  1 13:22:39.754: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  1 13:22:39.754: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  1 13:22:39.754: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  1 13:22:39.754: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  1 13:22:39.754: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  1 13:22:39.754: INFO: 	Container kube-scheduler ready: true, restart count 10
Jan  1 13:22:39.754: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  1 13:22:39.754: INFO: 	Container coredns ready: true, restart count 0
Jan  1 13:22:39.754: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  1 13:22:39.754: INFO: 	Container etcd ready: true, restart count 0
Jan  1 13:22:39.754: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  1 13:22:39.754: INFO: 	Container weave ready: true, restart count 0
Jan  1 13:22:39.754: INFO: 	Container weave-npc ready: true, restart count 0
Jan  1 13:22:39.754: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  1 13:22:39.754: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e5c602cd616e5a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:22:40.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5707" for this suite.
Jan  1 13:22:46.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:22:46.956: INFO: namespace sched-pred-5707 deletion completed in 6.145742678s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.372 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
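
Editor's sketch: the NodeSelector predicate is binary. A pod whose nodeSelector matches no node's labels is accepted by the API server but never scheduled, producing exactly the FailedScheduling event above. A minimal sketch, assuming client-go v0.18+; the selector key/value is an arbitrary pair assumed to match no node:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
    		Spec: corev1.PodSpec{
    			// No node carries this label, so the scheduler reports
    			// "0/2 nodes are available: 2 node(s) didn't match node selector."
    			NodeSelector: map[string]string{"label": "nonempty"},
    			Containers: []corev1.Container{{
    				Name:  "pause",
    				Image: "k8s.gcr.io/pause:3.1",
    			}},
    		},
    	}
    	if _, err := cs.CoreV1().Pods("sched-pred-5707").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    	// The API accepts the pod; it simply stays Pending indefinitely.
    }
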
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:22:46.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  1 13:25:47.207: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:25:47.275: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:25:49.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:25:49.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:25:51.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:25:51.286: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:25:53.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:25:53.289: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:25:55.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:25:55.286: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:25:57.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:25:57.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:25:59.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:25:59.296: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:01.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:01.289: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:03.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:03.593: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:05.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:05.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:07.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:07.289: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:09.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:09.306: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:11.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:11.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:13.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:13.284: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:15.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:15.289: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:17.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:17.286: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:19.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:19.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:21.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:21.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:23.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:23.282: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:25.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:25.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:27.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:27.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:29.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:29.290: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:31.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:31.289: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:33.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:33.284: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:35.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:35.282: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:37.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:37.295: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:39.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:39.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:41.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:41.289: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:43.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:43.293: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:45.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:45.284: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:47.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:47.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:49.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:49.293: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:51.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:51.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:53.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:53.283: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:55.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:55.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:57.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:57.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:26:59.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:26:59.321: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:01.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:01.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:03.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:03.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:05.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:05.288: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:07.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:07.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:09.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:09.284: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:11.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:11.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:13.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:13.288: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:15.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:15.285: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:17.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:17.292: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:19.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:19.329: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:21.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:21.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:23.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:23.287: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:25.276: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:25.284: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  1 13:27:27.275: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  1 13:27:27.283: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:27:27.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6704" for this suite.
Jan  1 13:27:49.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:27:49.532: INFO: namespace container-lifecycle-hook-6704 deletion completed in 22.242390398s

• [SLOW TEST:302.575 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
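The poll loop above ends once the hook pod is finally gone. For readers reproducing this by hand, a minimal pod exercising the same postStart path might look like the sketch below (illustrative only: the suite builds its pods in Go, and the image and hook command here are assumptions, not values from this run).

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine   # assumed image
    lifecycle:
      postStart:
        exec:
          # runs inside the container right after it starts; if it fails,
          # the kubelet kills the container
          command: ["/bin/sh", "-c", "echo poststart-ran > /tmp/poststart"]

In the actual test the hook's side effect is observed from outside the pod; the sketch only shows the hook wiring itself.
------------------------------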
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:27:49.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:28:49.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5124" for this suite.
Jan  1 13:29:11.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:29:11.847: INFO: namespace container-probe-5124 deletion completed in 22.19789877s

• [SLOW TEST:82.314 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
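Above, the pod is created at 13:27:49 and then simply observed until 13:28:49: it must stay Running, never become Ready, and never restart. A sketch of a pod whose readiness probe always fails (the image and timings are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine   # assumed image
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so Ready never becomes True
      initialDelaySeconds: 5
      periodSeconds: 5

A failing readiness probe only keeps the pod out of Service endpoints; unlike a liveness probe it never restarts the container, which is exactly what the one-minute observation window checks.
------------------------------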
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:29:11.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-d6zz
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 13:29:12.093: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-d6zz" in namespace "subpath-5094" to be "success or failure"
Jan  1 13:29:12.113: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Pending", Reason="", readiness=false. Elapsed: 19.76884ms
Jan  1 13:29:14.139: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045960548s
Jan  1 13:29:16.148: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054947865s
Jan  1 13:29:18.156: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062112226s
Jan  1 13:29:20.208: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114915885s
Jan  1 13:29:22.217: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 10.123146027s
Jan  1 13:29:24.225: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 12.131207424s
Jan  1 13:29:26.233: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 14.139536828s
Jan  1 13:29:28.243: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 16.149303617s
Jan  1 13:29:30.257: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 18.163216198s
Jan  1 13:29:32.265: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 20.171165178s
Jan  1 13:29:34.271: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 22.177210473s
Jan  1 13:29:36.281: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 24.18712191s
Jan  1 13:29:38.288: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 26.194947466s
Jan  1 13:29:40.297: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Running", Reason="", readiness=true. Elapsed: 28.203225793s
Jan  1 13:29:42.577: INFO: Pod "pod-subpath-test-downwardapi-d6zz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.483347113s
STEP: Saw pod success
Jan  1 13:29:42.577: INFO: Pod "pod-subpath-test-downwardapi-d6zz" satisfied condition "success or failure"
Jan  1 13:29:42.585: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-d6zz container test-container-subpath-downwardapi-d6zz: 
STEP: delete the pod
Jan  1 13:29:42.740: INFO: Waiting for pod pod-subpath-test-downwardapi-d6zz to disappear
Jan  1 13:29:42.756: INFO: Pod pod-subpath-test-downwardapi-d6zz no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-d6zz
Jan  1 13:29:42.756: INFO: Deleting pod "pod-subpath-test-downwardapi-d6zz" in namespace "subpath-5094"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:29:42.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5094" for this suite.
Jan  1 13:29:48.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:29:48.968: INFO: namespace subpath-5094 deletion completed in 6.205821418s

• [SLOW TEST:37.120 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
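The pod above runs for roughly 30 seconds while the test repeatedly reads a file through a subPath mount backed by an atomic-writer (downward API) volume. A hand-written equivalent might look like this sketch (the image and command are assumptions; the -d6zz name suffix in the run above is generated):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-downwardapi
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "cat /test-subpath && sleep 20"]
    volumeMounts:
    - name: downward
      mountPath: /test-subpath
      subPath: podname            # mounts one file out of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name

Downward API volumes are updated atomically via a symlink swap; the point of the test is that a subPath mount into such a volume keeps resolving correctly across those swaps.
------------------------------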
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:29:48.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:29:49.123: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f" in namespace "downward-api-9649" to be "success or failure"
Jan  1 13:29:49.136: INFO: Pod "downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.268757ms
Jan  1 13:29:51.149: INFO: Pod "downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02578792s
Jan  1 13:29:53.207: INFO: Pod "downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083262752s
Jan  1 13:29:55.215: INFO: Pod "downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0915586s
Jan  1 13:29:57.226: INFO: Pod "downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102140197s
Jan  1 13:29:59.237: INFO: Pod "downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113305967s
STEP: Saw pod success
Jan  1 13:29:59.237: INFO: Pod "downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f" satisfied condition "success or failure"
Jan  1 13:29:59.242: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f container client-container: 
STEP: delete the pod
Jan  1 13:29:59.304: INFO: Waiting for pod downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f to disappear
Jan  1 13:29:59.310: INFO: Pod downwardapi-volume-dab5672d-eb7d-47c0-9230-e03908a5890f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:29:59.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9649" for this suite.
Jan  1 13:30:05.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:30:05.552: INFO: namespace downward-api-9649 deletion completed in 6.235401624s

• [SLOW TEST:16.584 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
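The container in this test deliberately sets no memory limit and then reads limits.memory back through a downward API volume; with no limit declared, the value falls back to the node's allocatable memory (3936676Ki on iruya-node, per the node describe output later in this log). A sketch (the image and paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-memlimit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory on purpose: the default is node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
------------------------------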
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:30:05.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:30:05.643: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan  1 13:30:09.362: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:30:09.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7574" for this suite.
Jan  1 13:30:17.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:30:18.159: INFO: namespace replication-controller-7574 deletion completed in 8.761405159s

• [SLOW TEST:12.606 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
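The quota/RC interplay above can be reproduced with two small objects: a ResourceQuota capping the namespace at two pods, and an RC asking for three. A sketch (the names match the log; the image and labels are assumptions):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                     # at most two pods in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                     # one more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed image

The third pod is rejected by quota admission, and the controller surfaces that as a ReplicaFailure condition in the RC's status; scaling replicas down to 2 clears the condition, which is the sequence the STEP lines record.
------------------------------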
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:30:18.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:30:19.197: INFO: Create a RollingUpdate DaemonSet
Jan  1 13:30:19.226: INFO: Check that daemon pods launch on every node of the cluster
Jan  1 13:30:19.247: INFO: Number of nodes with available pods: 0
Jan  1 13:30:19.247: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:20.555: INFO: Number of nodes with available pods: 0
Jan  1 13:30:20.555: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:21.261: INFO: Number of nodes with available pods: 0
Jan  1 13:30:21.261: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:22.369: INFO: Number of nodes with available pods: 0
Jan  1 13:30:22.369: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:23.267: INFO: Number of nodes with available pods: 0
Jan  1 13:30:23.267: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:24.668: INFO: Number of nodes with available pods: 0
Jan  1 13:30:24.669: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:25.995: INFO: Number of nodes with available pods: 0
Jan  1 13:30:25.995: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:26.430: INFO: Number of nodes with available pods: 0
Jan  1 13:30:26.430: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:27.265: INFO: Number of nodes with available pods: 0
Jan  1 13:30:27.265: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:28.283: INFO: Number of nodes with available pods: 0
Jan  1 13:30:28.283: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:29.261: INFO: Number of nodes with available pods: 1
Jan  1 13:30:29.261: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:30:30.273: INFO: Number of nodes with available pods: 2
Jan  1 13:30:30.273: INFO: Number of running nodes: 2, number of available pods: 2
Jan  1 13:30:30.273: INFO: Update the DaemonSet to trigger a rollout
Jan  1 13:30:30.286: INFO: Updating DaemonSet daemon-set
Jan  1 13:30:36.335: INFO: Roll back the DaemonSet before rollout is complete
Jan  1 13:30:36.355: INFO: Updating DaemonSet daemon-set
Jan  1 13:30:36.355: INFO: Make sure DaemonSet rollback is complete
Jan  1 13:30:36.416: INFO: Wrong image for pod: daemon-set-9tqjf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  1 13:30:36.416: INFO: Pod daemon-set-9tqjf is not available
Jan  1 13:30:37.445: INFO: Wrong image for pod: daemon-set-9tqjf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  1 13:30:37.445: INFO: Pod daemon-set-9tqjf is not available
Jan  1 13:30:38.442: INFO: Wrong image for pod: daemon-set-9tqjf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  1 13:30:38.442: INFO: Pod daemon-set-9tqjf is not available
Jan  1 13:30:39.441: INFO: Wrong image for pod: daemon-set-9tqjf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  1 13:30:39.441: INFO: Pod daemon-set-9tqjf is not available
Jan  1 13:30:40.441: INFO: Wrong image for pod: daemon-set-9tqjf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  1 13:30:40.441: INFO: Pod daemon-set-9tqjf is not available
Jan  1 13:30:41.895: INFO: Wrong image for pod: daemon-set-9tqjf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  1 13:30:41.895: INFO: Pod daemon-set-9tqjf is not available
Jan  1 13:30:42.453: INFO: Pod daemon-set-rp7xj is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8977, will wait for the garbage collector to delete the pods
Jan  1 13:30:42.716: INFO: Deleting DaemonSet.extensions daemon-set took: 10.928694ms
Jan  1 13:30:43.017: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.696336ms
Jan  1 13:30:56.629: INFO: Number of nodes with available pods: 0
Jan  1 13:30:56.629: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 13:30:56.636: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8977/daemonsets","resourceVersion":"18896444"},"items":null}

Jan  1 13:30:56.640: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8977/pods","resourceVersion":"18896444"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:30:56.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8977" for this suite.
Jan  1 13:31:02.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:31:02.785: INFO: namespace daemonsets-8977 deletion completed in 6.125497914s

• [SLOW TEST:44.625 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
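The rollback scenario needs only a RollingUpdate DaemonSet; the test then patches the image to the unreachable foo:non-existent and immediately rolls back. A sketch (the labels are assumptions; the expected image comes from the log):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set            # assumed label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # the image the log expects

Rolling back (by hand: kubectl rollout undo daemonset/daemon-set) restores the old template. Pods that never left the good image are not restarted; only the broken replacement (daemon-set-9tqjf above, still on foo:non-existent) is replaced, by daemon-set-rp7xj.
------------------------------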
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:31:02.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7438
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7438
STEP: Deleting pre-stop pod
Jan  1 13:31:26.064: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:31:26.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7438" for this suite.
Jan  1 13:32:06.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:32:06.326: INFO: namespace prestop-7438 deletion completed in 40.242370664s

• [SLOW TEST:63.541 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
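The JSON dump above comes from the server pod, which records exactly one "prestop" request from the dying tester pod. The tester's interesting field is its preStop hook, sketched here (the image, command, and server URL are assumptions; the suite's pods are built in Go):

apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # assumed endpoint: report to the server pod before terminating
          command: ["/bin/sh", "-c", "wget -q -O- http://server:8080/prestop"]

Deleting the tester runs the preStop hook before the container receives SIGTERM, which is why the server's counter reads {"prestop": 1}.
------------------------------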
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:32:06.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  1 13:32:06.472: INFO: Waiting up to 5m0s for pod "pod-a1e828a3-0f38-44c0-b354-d2fd450bde22" in namespace "emptydir-6984" to be "success or failure"
Jan  1 13:32:06.497: INFO: Pod "pod-a1e828a3-0f38-44c0-b354-d2fd450bde22": Phase="Pending", Reason="", readiness=false. Elapsed: 24.230999ms
Jan  1 13:32:08.509: INFO: Pod "pod-a1e828a3-0f38-44c0-b354-d2fd450bde22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03656504s
Jan  1 13:32:10.520: INFO: Pod "pod-a1e828a3-0f38-44c0-b354-d2fd450bde22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047545968s
Jan  1 13:32:12.539: INFO: Pod "pod-a1e828a3-0f38-44c0-b354-d2fd450bde22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066012281s
Jan  1 13:32:14.556: INFO: Pod "pod-a1e828a3-0f38-44c0-b354-d2fd450bde22": Phase="Running", Reason="", readiness=true. Elapsed: 8.082863753s
Jan  1 13:32:16.585: INFO: Pod "pod-a1e828a3-0f38-44c0-b354-d2fd450bde22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112294752s
STEP: Saw pod success
Jan  1 13:32:16.586: INFO: Pod "pod-a1e828a3-0f38-44c0-b354-d2fd450bde22" satisfied condition "success or failure"
Jan  1 13:32:16.600: INFO: Trying to get logs from node iruya-node pod pod-a1e828a3-0f38-44c0-b354-d2fd450bde22 container test-container: 
STEP: delete the pod
Jan  1 13:32:17.100: INFO: Waiting for pod pod-a1e828a3-0f38-44c0-b354-d2fd450bde22 to disappear
Jan  1 13:32:17.133: INFO: Pod pod-a1e828a3-0f38-44c0-b354-d2fd450bde22 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:32:17.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6984" for this suite.
Jan  1 13:32:23.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:32:23.314: INFO: namespace emptydir-6984 deletion completed in 6.173844621s

• [SLOW TEST:16.987 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
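"(root,0644,default)" decodes as: running as root, expecting mode 0644 on the file, with the default emptyDir medium (node disk rather than tmpfs). A hand-rolled equivalent (the suite uses its own mount-test image; busybox and the command here are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium: backed by node disk
------------------------------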
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:32:23.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:32:23.495: INFO: Waiting up to 5m0s for pod "downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575" in namespace "projected-2791" to be "success or failure"
Jan  1 13:32:23.565: INFO: Pod "downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575": Phase="Pending", Reason="", readiness=false. Elapsed: 69.867814ms
Jan  1 13:32:25.577: INFO: Pod "downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0813328s
Jan  1 13:32:27.585: INFO: Pod "downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090072765s
Jan  1 13:32:29.598: INFO: Pod "downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103129971s
Jan  1 13:32:31.612: INFO: Pod "downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116481858s
STEP: Saw pod success
Jan  1 13:32:31.612: INFO: Pod "downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575" satisfied condition "success or failure"
Jan  1 13:32:31.615: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575 container client-container: 
STEP: delete the pod
Jan  1 13:32:31.698: INFO: Waiting for pod downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575 to disappear
Jan  1 13:32:31.706: INFO: Pod downwardapi-volume-554b6b0d-8d86-4155-8637-fba264b55575 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:32:31.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2791" for this suite.
Jan  1 13:32:37.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:32:37.936: INFO: namespace projected-2791 deletion completed in 6.221617315s

• [SLOW TEST:14.621 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
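Same downward API idea as earlier, but through a projected volume and for requests.memory rather than the limit. A sketch (the image and the 32Mi request are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-memrequest
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi              # assumed value; the file then reads 33554432 (bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
------------------------------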
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:32:37.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  1 13:32:38.079: INFO: Waiting up to 5m0s for pod "pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda" in namespace "emptydir-6203" to be "success or failure"
Jan  1 13:32:38.087: INFO: Pod "pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.432585ms
Jan  1 13:32:40.095: INFO: Pod "pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016314173s
Jan  1 13:32:42.149: INFO: Pod "pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070041217s
Jan  1 13:32:44.209: INFO: Pod "pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130364972s
Jan  1 13:32:46.219: INFO: Pod "pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13997942s
Jan  1 13:32:48.228: INFO: Pod "pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149143706s
STEP: Saw pod success
Jan  1 13:32:48.228: INFO: Pod "pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda" satisfied condition "success or failure"
Jan  1 13:32:48.234: INFO: Trying to get logs from node iruya-node pod pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda container test-container: 
STEP: delete the pod
Jan  1 13:32:48.298: INFO: Waiting for pod pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda to disappear
Jan  1 13:32:48.313: INFO: Pod pod-8e3ffc67-fc94-4c53-834b-4b1e40602fda no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:32:48.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6203" for this suite.
Jan  1 13:32:54.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:32:54.558: INFO: namespace emptydir-6203 deletion completed in 6.234842371s

• [SLOW TEST:16.622 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
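The (non-root,0777,default) variant differs from the (root,0644) case above only in the security context and the mode bits; the delta, under the same caveats:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0777-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # assumed non-root UID
  containers:
  - name: test-container
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "echo content > /test-volume/file && chmod 0777 /test-volume/file && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}

This works for a non-root user because the emptyDir directory itself is created world-writable by default.
------------------------------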
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:32:54.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9369ca23-e386-4741-b20a-f47bbe37cdd7
STEP: Creating a pod to test consume configMaps
Jan  1 13:32:54.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6" in namespace "configmap-3956" to be "success or failure"
Jan  1 13:32:54.863: INFO: Pod "pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6": Phase="Pending", Reason="", readiness=false. Elapsed: 29.537316ms
Jan  1 13:32:56.875: INFO: Pod "pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04161193s
Jan  1 13:32:58.887: INFO: Pod "pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054309947s
Jan  1 13:33:00.895: INFO: Pod "pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062283589s
Jan  1 13:33:02.913: INFO: Pod "pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079730451s
Jan  1 13:33:04.961: INFO: Pod "pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.128169462s
STEP: Saw pod success
Jan  1 13:33:04.961: INFO: Pod "pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6" satisfied condition "success or failure"
Jan  1 13:33:04.967: INFO: Trying to get logs from node iruya-node pod pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6 container configmap-volume-test: 
STEP: delete the pod
Jan  1 13:33:05.669: INFO: Waiting for pod pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6 to disappear
Jan  1 13:33:05.674: INFO: Pod pod-configmaps-fe9ec0bc-4609-4757-a07a-cbe548ad31c6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:33:05.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3956" for this suite.
Jan  1 13:33:11.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:33:11.881: INFO: namespace configmap-3956 deletion completed in 6.198332782s

• [SLOW TEST:17.322 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
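"Mappings and Item mode set" means individual ConfigMap keys are remapped to new relative paths and given per-item file modes. A sketch (the ConfigMap name matches the log minus its random suffix; the key, value, paths, mode, and image are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-mapped
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-2      # key remapped to a new relative path
        mode: 0400                # per-item file mode
------------------------------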
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:33:11.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-f29939a3-ee9d-444e-b3da-2fc3a41e0643
STEP: Creating secret with name s-test-opt-upd-ad907e1c-8c21-46c1-81d5-a03331cdc731
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f29939a3-ee9d-444e-b3da-2fc3a41e0643
STEP: Updating secret s-test-opt-upd-ad907e1c-8c21-46c1-81d5-a03331cdc731
STEP: Creating secret with name s-test-opt-create-f17c9424-acec-423d-bc5a-7fc1ffd8eca3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:33:26.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3949" for this suite.
Jan  1 13:33:48.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:33:48.532: INFO: namespace projected-3949 deletion completed in 22.197847078s

• [SLOW TEST:36.651 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
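All three secrets are projected into one volume with optional: true, which is what lets the pod tolerate one being deleted, one updated, and one created only after startup. A sketch of the volume wiring (the pod name, image, and mount path are assumptions; the secret names come from the STEP lines, minus their random suffixes):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: watcher
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected-secrets
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del    # deleted mid-test
          optional: true
      - secret:
          name: s-test-opt-upd    # updated mid-test
          optional: true
      - secret:
          name: s-test-opt-create # created only after the pod is running
          optional: true

The kubelet keeps the projected volume in sync as the secrets change, and the "waiting to observe update in volume" step polls the mounted files until all three changes are visible.
------------------------------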
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:33:48.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  1 13:33:57.876: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:33:57.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9407" for this suite.
Jan  1 13:34:04.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:34:04.166: INFO: namespace container-runtime-9407 deletion completed in 6.228288775s

• [SLOW TEST:15.633 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
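The "Expected: &{OK}" line is the assertion: the container wrote OK to its termination-message file and exited 0. A sketch of such a container (the image and command are assumptions; the policy and path fields are the ones the test name cites):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-file
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError

FallbackToLogsOnError only substitutes container logs when the file is empty and the container failed; since this container succeeds and writes the file, the reported message is the file's contents.
------------------------------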
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:34:04.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:34:04.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4754'
Jan  1 13:34:06.451: INFO: stderr: ""
Jan  1 13:34:06.451: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan  1 13:34:06.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4754'
Jan  1 13:34:07.082: INFO: stderr: ""
Jan  1 13:34:07.082: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  1 13:34:08.095: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:08.095: INFO: Found 0 / 1
Jan  1 13:34:09.089: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:09.089: INFO: Found 0 / 1
Jan  1 13:34:10.105: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:10.105: INFO: Found 0 / 1
Jan  1 13:34:11.143: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:11.144: INFO: Found 0 / 1
Jan  1 13:34:12.097: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:12.097: INFO: Found 0 / 1
Jan  1 13:34:13.205: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:13.205: INFO: Found 0 / 1
Jan  1 13:34:14.098: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:14.098: INFO: Found 0 / 1
Jan  1 13:34:15.097: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:15.097: INFO: Found 0 / 1
Jan  1 13:34:16.088: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:16.088: INFO: Found 1 / 1
Jan  1 13:34:16.088: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  1 13:34:16.091: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 13:34:16.091: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  1 13:34:16.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gt69v --namespace=kubectl-4754'
Jan  1 13:34:16.279: INFO: stderr: ""
Jan  1 13:34:16.279: INFO: stdout: "Name:           redis-master-gt69v\nNamespace:      kubectl-4754\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Wed, 01 Jan 2020 13:34:06 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://4037ebfb3519539ed23bc95aff1b9ed8b01e112c0e543f92245ccda3af9ad2a1\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 01 Jan 2020 13:34:14 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9qlrq (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-9qlrq:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-9qlrq\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  10s   default-scheduler    Successfully assigned kubectl-4754/redis-master-gt69v to iruya-node\n  Normal  Pulled     6s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Jan  1 13:34:16.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4754'
Jan  1 13:34:16.438: INFO: stderr: ""
Jan  1 13:34:16.438: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-4754\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: redis-master-gt69v\n"
Jan  1 13:34:16.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4754'
Jan  1 13:34:16.577: INFO: stderr: ""
Jan  1 13:34:16.577: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-4754\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.107.200.163\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan  1 13:34:16.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan  1 13:34:16.787: INFO: stderr: ""
Jan  1 13:34:16.787: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Wed, 01 Jan 2020 13:33:59 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 01 Jan 2020 13:33:59 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 01 Jan 2020 13:33:59 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 01 Jan 2020 13:33:59 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         150d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         81d\n  kubectl-4754               redis-master-gt69v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  
ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan  1 13:34:16.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4754'
Jan  1 13:34:16.975: INFO: stderr: ""
Jan  1 13:34:16.976: INFO: stdout: "Name:         kubectl-4754\nLabels:       e2e-framework=kubectl\n              e2e-run=87e7bcb4-9d7a-4846-b361-e187da49d16d\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:34:16.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4754" for this suite.
Jan  1 13:34:39.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:34:39.109: INFO: namespace kubectl-4754 deletion completed in 22.125699886s

• [SLOW TEST:34.941 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
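The describe output above pins down most of what the fixtures must contain; reassembled as manifests, they would look roughly like this (replica count, labels, image, and the named target port are taken from the describe output; everything else is reconstruction):

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server      # matches "TargetPort: redis-server/TCP" above
------------------------------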
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:34:39.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:34:39.285: INFO: Creating ReplicaSet my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316
Jan  1 13:34:39.309: INFO: Pod name my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316: Found 0 pods out of 1
Jan  1 13:34:44.329: INFO: Pod name my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316: Found 1 pods out of 1
Jan  1 13:34:44.329: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316" is running
Jan  1 13:34:48.352: INFO: Pod "my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316-lmfn5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:34:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:34:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:34:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:34:39 +0000 UTC Reason: Message:}])
Jan  1 13:34:48.352: INFO: Trying to dial the pod
Jan  1 13:34:53.405: INFO: Controller my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316: Got expected result from replica 1 [my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316-lmfn5]: "my-hostname-basic-a2d19285-6acf-41a4-a28f-1f1687835316-lmfn5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:34:53.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-999" for this suite.
Jan  1 13:34:59.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:34:59.635: INFO: namespace replicaset-999 deletion completed in 6.22293132s

• [SLOW TEST:20.525 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
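The ReplicaSet here serves each replica's hostname over HTTP, and the test dials every replica expecting the pod's own name back (the "Got expected result" line). A sketch (the image tag and port are assumptions; the name prefix comes from the log, minus its UUID suffix):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed tag
        ports:
        - containerPort: 9376     # assumed port
------------------------------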
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:34:59.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan  1 13:34:59.712: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:35:26.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2118" for this suite.
Jan  1 13:35:32.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:35:32.826: INFO: namespace pods-2118 deletion completed in 6.139700818s

• [SLOW TEST:33.189 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
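
The same submit/observe/remove cycle can be driven by hand with a watch (pod name and image are illustrative): the watch prints the pod when it is added, again as it is modified during graceful termination, and once more when the deletion completes.

kubectl get pods --watch &
kubectl run pod-submit-remove --image=nginx:1.14-alpine --restart=Never
kubectl delete pod pod-submit-remove --grace-period=30   # graceful delete, as in "deleting the pod gracefully" above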
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:35:32.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b4b95846-1d87-43a3-9530-e83271efc90b
STEP: Creating a pod to test consume configMaps
Jan  1 13:35:33.164: INFO: Waiting up to 5m0s for pod "pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b" in namespace "configmap-7588" to be "success or failure"
Jan  1 13:35:33.186: INFO: Pod "pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.193547ms
Jan  1 13:35:35.196: INFO: Pod "pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032010541s
Jan  1 13:35:37.204: INFO: Pod "pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040021538s
Jan  1 13:35:39.212: INFO: Pod "pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0471795s
Jan  1 13:35:41.218: INFO: Pod "pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053799357s
Jan  1 13:35:43.225: INFO: Pod "pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060750408s
STEP: Saw pod success
Jan  1 13:35:43.225: INFO: Pod "pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b" satisfied condition "success or failure"
Jan  1 13:35:43.229: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b container configmap-volume-test: 
STEP: delete the pod
Jan  1 13:35:43.268: INFO: Waiting for pod pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b to disappear
Jan  1 13:35:43.273: INFO: Pod pod-configmaps-6beb17e1-0ad1-40ea-ab29-03fa9705091b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:35:43.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7588" for this suite.
Jan  1 13:35:49.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:35:49.533: INFO: namespace configmap-7588 deletion completed in 6.254761616s

• [SLOW TEST:16.706 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
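
A minimal reproduction of the consume-from-volume pattern exercised here, assuming busybox in place of the suite's own test image (all names illustrative):

kubectl create configmap configmap-test-volume --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-volume
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF
kubectl logs pod-configmap-volume   # prints "value-1" once the pod has Succeeded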
------------------------------
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:35:49.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-1734
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1734 to expose endpoints map[]
Jan  1 13:35:49.707: INFO: successfully validated that service multi-endpoint-test in namespace services-1734 exposes endpoints map[] (13.962783ms elapsed)
STEP: Creating pod pod1 in namespace services-1734
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1734 to expose endpoints map[pod1:[100]]
Jan  1 13:35:53.819: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.08961828s elapsed, will retry)
Jan  1 13:35:56.867: INFO: successfully validated that service multi-endpoint-test in namespace services-1734 exposes endpoints map[pod1:[100]] (7.13757285s elapsed)
STEP: Creating pod pod2 in namespace services-1734
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1734 to expose endpoints map[pod1:[100] pod2:[101]]
Jan  1 13:36:01.782: INFO: Unexpected endpoints: found map[98ff49f9-78c5-4851-ad06-fcd8480677f4:[100]], expected map[pod1:[100] pod2:[101]] (4.905594062s elapsed, will retry)
Jan  1 13:36:05.038: INFO: successfully validated that service multi-endpoint-test in namespace services-1734 exposes endpoints map[pod1:[100] pod2:[101]] (8.161457573s elapsed)
STEP: Deleting pod pod1 in namespace services-1734
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1734 to expose endpoints map[pod2:[101]]
Jan  1 13:36:06.218: INFO: successfully validated that service multi-endpoint-test in namespace services-1734 exposes endpoints map[pod2:[101]] (1.161534241s elapsed)
STEP: Deleting pod pod2 in namespace services-1734
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1734 to expose endpoints map[]
Jan  1 13:36:07.518: INFO: successfully validated that service multi-endpoint-test in namespace services-1734 exposes endpoints map[] (1.29438639s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:36:07.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1734" for this suite.
Jan  1 13:36:29.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:36:30.088: INFO: namespace services-1734 deletion completed in 22.132110195s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.555 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
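
The endpoints maps above (pod1:[100], pod2:[101]) pair each pod with the target port it backs. A sketch of the Service shape involved, with port numbers matching the log but otherwise illustrative: because the targetPorts are named, each pod appears only in the endpoint list for the container port name it actually declares.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: portname1   # pod1 declares containerPort 100 under this name
  - name: portname2
    port: 81
    targetPort: portname2   # pod2 declares containerPort 101 under this name
EOF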
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:36:30.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan  1 13:36:30.248: INFO: Waiting up to 5m0s for pod "var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3" in namespace "var-expansion-1076" to be "success or failure"
Jan  1 13:36:30.285: INFO: Pod "var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3": Phase="Pending", Reason="", readiness=false. Elapsed: 36.121645ms
Jan  1 13:36:32.311: INFO: Pod "var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062351412s
Jan  1 13:36:34.315: INFO: Pod "var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066119158s
Jan  1 13:36:36.331: INFO: Pod "var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081887267s
Jan  1 13:36:38.345: INFO: Pod "var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096603511s
Jan  1 13:36:41.388: INFO: Pod "var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.138887946s
STEP: Saw pod success
Jan  1 13:36:41.388: INFO: Pod "var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3" satisfied condition "success or failure"
Jan  1 13:36:41.397: INFO: Trying to get logs from node iruya-node pod var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3 container dapi-container: 
STEP: delete the pod
Jan  1 13:36:41.739: INFO: Waiting for pod var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3 to disappear
Jan  1 13:36:41.751: INFO: Pod var-expansion-fdc94116-6a0a-470d-a9e1-3722489766b3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:36:41.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1076" for this suite.
Jan  1 13:36:47.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:36:48.084: INFO: namespace var-expansion-1076 deletion completed in 6.323451475s

• [SLOW TEST:17.993 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
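
The substitution under test is kubelet-side $(VAR) expansion in the container command, not shell expansion. A minimal sketch (names, value, and image illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "hello from var-expansion"
    command: ["echo", "$(MESSAGE)"]   # expanded by the kubelet before the container starts
EOF
kubectl logs var-expansion   # "hello from var-expansion"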
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:36:48.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-45b9aae8-d854-4a68-a45d-4427ddcc1458
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:37:00.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3613" for this suite.
Jan  1 13:37:22.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:37:22.718: INFO: namespace configmap-3613 deletion completed in 22.191241692s

• [SLOW TEST:34.634 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
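
The binaryData field is what distinguishes this spec from the plain text-data case; kubectl places non-UTF-8 file content there automatically (file name and payload illustrative):

printf '\xDE\xAD\xBE\xEF' > payload.bin
kubectl create configmap configmap-test-upd \
  --from-literal=text-data=hello \
  --from-file=binary-data=payload.bin
kubectl get configmap configmap-test-upd -o yaml   # text under .data, bytes base64-encoded under .binaryData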
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:37:22.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:37:22.927: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  1 13:37:22.944: INFO: Number of nodes with available pods: 0
Jan  1 13:37:22.944: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  1 13:37:23.116: INFO: Number of nodes with available pods: 0
Jan  1 13:37:23.117: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:24.160: INFO: Number of nodes with available pods: 0
Jan  1 13:37:24.160: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:25.168: INFO: Number of nodes with available pods: 0
Jan  1 13:37:25.169: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:26.124: INFO: Number of nodes with available pods: 0
Jan  1 13:37:26.124: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:27.126: INFO: Number of nodes with available pods: 0
Jan  1 13:37:27.126: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:28.123: INFO: Number of nodes with available pods: 0
Jan  1 13:37:28.123: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:29.124: INFO: Number of nodes with available pods: 0
Jan  1 13:37:29.124: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:30.129: INFO: Number of nodes with available pods: 0
Jan  1 13:37:30.129: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:31.127: INFO: Number of nodes with available pods: 1
Jan  1 13:37:31.127: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  1 13:37:31.211: INFO: Number of nodes with available pods: 1
Jan  1 13:37:31.211: INFO: Number of running nodes: 0, number of available pods: 1
Jan  1 13:37:32.221: INFO: Number of nodes with available pods: 0
Jan  1 13:37:32.221: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  1 13:37:32.234: INFO: Number of nodes with available pods: 0
Jan  1 13:37:32.234: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:33.242: INFO: Number of nodes with available pods: 0
Jan  1 13:37:33.242: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:34.244: INFO: Number of nodes with available pods: 0
Jan  1 13:37:34.244: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:35.249: INFO: Number of nodes with available pods: 0
Jan  1 13:37:35.250: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:36.244: INFO: Number of nodes with available pods: 0
Jan  1 13:37:36.244: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:37.252: INFO: Number of nodes with available pods: 0
Jan  1 13:37:37.252: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:38.250: INFO: Number of nodes with available pods: 0
Jan  1 13:37:38.250: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:39.252: INFO: Number of nodes with available pods: 0
Jan  1 13:37:39.253: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:40.244: INFO: Number of nodes with available pods: 0
Jan  1 13:37:40.244: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:41.243: INFO: Number of nodes with available pods: 0
Jan  1 13:37:41.243: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:42.243: INFO: Number of nodes with available pods: 0
Jan  1 13:37:42.243: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:43.247: INFO: Number of nodes with available pods: 0
Jan  1 13:37:43.247: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:44.246: INFO: Number of nodes with available pods: 0
Jan  1 13:37:44.246: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:45.242: INFO: Number of nodes with available pods: 0
Jan  1 13:37:45.242: INFO: Node iruya-node is running more than one daemon pod
Jan  1 13:37:46.318: INFO: Number of nodes with available pods: 1
Jan  1 13:37:46.318: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6082, will wait for the garbage collector to delete the pods
Jan  1 13:37:46.405: INFO: Deleting DaemonSet.extensions daemon-set took: 21.721298ms
Jan  1 13:37:46.705: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.530666ms
Jan  1 13:37:56.622: INFO: Number of nodes with available pods: 0
Jan  1 13:37:56.622: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 13:37:56.631: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6082/daemonsets","resourceVersion":"18897522"},"items":null}

Jan  1 13:37:56.639: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6082/pods","resourceVersion":"18897522"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:37:56.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6082" for this suite.
Jan  1 13:38:02.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:38:02.921: INFO: namespace daemonsets-6082 deletion completed in 6.174742213s

• [SLOW TEST:40.202 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
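
The label dance above can be reproduced directly; the DaemonSet's nodeSelector is what makes its pods appear and disappear as node labels change (names and image illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
kubectl label node iruya-node color=blue              # daemon pod is launched on the node
kubectl label node iruya-node color=green --overwrite # pod is unscheduled until the selector is updated to green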
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:38:02.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0101 13:38:17.501996       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 13:38:17.502: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:38:17.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9803" for this suite.
Jan  1 13:38:34.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:38:34.261: INFO: namespace gc-9803 deletion completed in 16.350602125s

• [SLOW TEST:31.340 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
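
The survival guarantee rests on ownerReferences: a pod that lists both ReplicationControllers as owners stays alive while at least one owner remains valid. Which owners each pod carries can be inspected with a jsonpath query along these lines (expression illustrative):

kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.ownerReferences[*].name}{"\n"}{end}'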
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:38:34.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 13:38:34.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9281'
Jan  1 13:38:34.517: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 13:38:34.517: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan  1 13:38:34.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9281'
Jan  1 13:38:34.790: INFO: stderr: ""
Jan  1 13:38:34.790: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:38:34.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9281" for this suite.
Jan  1 13:38:40.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:38:40.987: INFO: namespace kubectl-9281 deletion completed in 6.189551093s

• [SLOW TEST:6.725 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
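
The stderr line above is the generator deprecation that eventually removed this form of kubectl run. The replacement the warning points at is kubectl create; note that kubectl create job defaults its pods to restartPolicy Never rather than the OnFailure this spec requests:

# as run by the suite (deprecated generator):
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine
# replacement suggested by the warning:
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine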
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:38:40.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  1 13:38:41.135: INFO: Waiting up to 5m0s for pod "pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1" in namespace "emptydir-9026" to be "success or failure"
Jan  1 13:38:41.149: INFO: Pod "pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.598462ms
Jan  1 13:38:43.178: INFO: Pod "pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042723754s
Jan  1 13:38:45.190: INFO: Pod "pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054171399s
Jan  1 13:38:47.202: INFO: Pod "pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066490662s
Jan  1 13:38:49.217: INFO: Pod "pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082088316s
Jan  1 13:38:51.227: INFO: Pod "pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091495062s
STEP: Saw pod success
Jan  1 13:38:51.227: INFO: Pod "pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1" satisfied condition "success or failure"
Jan  1 13:38:51.232: INFO: Trying to get logs from node iruya-node pod pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1 container test-container: 
STEP: delete the pod
Jan  1 13:38:51.300: INFO: Waiting for pod pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1 to disappear
Jan  1 13:38:51.405: INFO: Pod pod-44f9bead-21f7-4fc0-9cda-9158711e5ce1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:38:51.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9026" for this suite.
Jan  1 13:38:57.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:38:57.599: INFO: namespace emptydir-9026 deletion completed in 6.177184965s

• [SLOW TEST:16.611 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
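
"(root,0666,default)" reads as: running as root, expecting mode 0666 on the file, on the default (node-disk) medium. A hand-rolled check along the same lines (names and image illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium, i.e. backed by node disk rather than tmpfs
EOF
kubectl logs pod-emptydir-0666   # expect -rw-rw-rw-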
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:38:57.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  1 13:38:57.669: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-a,UID:1aa1b27d-183e-4a52-b058-69e790d99560,ResourceVersion:18897799,Generation:0,CreationTimestamp:2020-01-01 13:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 13:38:57.669: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-a,UID:1aa1b27d-183e-4a52-b058-69e790d99560,ResourceVersion:18897799,Generation:0,CreationTimestamp:2020-01-01 13:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  1 13:39:07.686: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-a,UID:1aa1b27d-183e-4a52-b058-69e790d99560,ResourceVersion:18897814,Generation:0,CreationTimestamp:2020-01-01 13:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  1 13:39:07.686: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-a,UID:1aa1b27d-183e-4a52-b058-69e790d99560,ResourceVersion:18897814,Generation:0,CreationTimestamp:2020-01-01 13:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  1 13:39:17.701: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-a,UID:1aa1b27d-183e-4a52-b058-69e790d99560,ResourceVersion:18897829,Generation:0,CreationTimestamp:2020-01-01 13:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 13:39:17.701: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-a,UID:1aa1b27d-183e-4a52-b058-69e790d99560,ResourceVersion:18897829,Generation:0,CreationTimestamp:2020-01-01 13:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  1 13:39:27.723: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-a,UID:1aa1b27d-183e-4a52-b058-69e790d99560,ResourceVersion:18897843,Generation:0,CreationTimestamp:2020-01-01 13:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 13:39:27.724: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-a,UID:1aa1b27d-183e-4a52-b058-69e790d99560,ResourceVersion:18897843,Generation:0,CreationTimestamp:2020-01-01 13:38:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  1 13:39:37.743: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-b,UID:ab5fec71-7cac-445a-9dfa-3689823d967a,ResourceVersion:18897856,Generation:0,CreationTimestamp:2020-01-01 13:39:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 13:39:37.743: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-b,UID:ab5fec71-7cac-445a-9dfa-3689823d967a,ResourceVersion:18897856,Generation:0,CreationTimestamp:2020-01-01 13:39:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  1 13:39:47.765: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-b,UID:ab5fec71-7cac-445a-9dfa-3689823d967a,ResourceVersion:18897871,Generation:0,CreationTimestamp:2020-01-01 13:39:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 13:39:47.765: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1792,SelfLink:/api/v1/namespaces/watch-1792/configmaps/e2e-watch-test-configmap-b,UID:ab5fec71-7cac-445a-9dfa-3689823d967a,ResourceVersion:18897871,Generation:0,CreationTimestamp:2020-01-01 13:39:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:39:57.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1792" for this suite.
Jan  1 13:40:03.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:40:03.993: INFO: namespace watch-1792 deletion completed in 6.216510659s

• [SLOW TEST:66.393 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
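
The A/B/A-or-B fan-out can be reproduced with label-selector watches; each watcher reports only the configmaps matching its selector, which is why every event above is delivered exactly twice (names illustrative):

kubectl get configmaps -l 'watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)' --watch &
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"
EOF
kubectl delete configmap e2e-watch-test-configmap-a   # the watch reports the map again as it is deleted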
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:40:03.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1990/configmap-test-a4dad5c8-1f82-464b-a475-45e5cfedd74d
STEP: Creating a pod to test consume configMaps
Jan  1 13:40:04.269: INFO: Waiting up to 5m0s for pod "pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4" in namespace "configmap-1990" to be "success or failure"
Jan  1 13:40:04.285: INFO: Pod "pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.036512ms
Jan  1 13:40:06.293: INFO: Pod "pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023566674s
Jan  1 13:40:08.319: INFO: Pod "pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050133945s
Jan  1 13:40:10.326: INFO: Pod "pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057153906s
Jan  1 13:40:12.331: INFO: Pod "pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062062437s
STEP: Saw pod success
Jan  1 13:40:12.331: INFO: Pod "pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4" satisfied condition "success or failure"
Jan  1 13:40:12.334: INFO: Trying to get logs from node iruya-node pod pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4 container env-test: 
STEP: delete the pod
Jan  1 13:40:12.437: INFO: Waiting for pod pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4 to disappear
Jan  1 13:40:12.450: INFO: Pod pod-configmaps-080fdcad-8a18-4646-a5ab-8fa05c8f10d4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:40:12.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1990" for this suite.
Jan  1 13:40:18.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:40:18.691: INFO: namespace configmap-1990 deletion completed in 6.229772082s

• [SLOW TEST:14.697 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
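
Consuming "via the environment" means configMapKeyRef rather than a volume mount. A minimal sketch (names and image illustrative):

kubectl create configmap configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF
kubectl logs pod-configmap-env | grep CONFIG_DATA_1   # CONFIG_DATA_1=value-1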
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:40:18.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0101 13:40:49.309216       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 13:40:49.309: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:40:49.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3286" for this suite.
Jan  1 13:40:56.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:40:57.323: INFO: namespace gc-3286 deletion completed in 8.008237633s

• [SLOW TEST:38.632 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
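
Orphaning is a property of the delete request, not of the deployment itself. With kubectl of roughly this vintage the flag was --cascade=false (newer releases spell it --cascade=orphan); both forms set deleteOptions.propagationPolicy to Orphan, which is what this spec exercises (deployment name illustrative):

kubectl delete deployment test-orphan-deployment --cascade=false
kubectl get rs   # the ReplicaSet survives its owner's deletion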
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:40:57.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9008f2ad-beac-49e9-bd90-a4fc95659655
STEP: Creating a pod to test consume configMaps
Jan  1 13:40:57.583: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64" in namespace "configmap-7691" to be "success or failure"
Jan  1 13:40:57.593: INFO: Pod "pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64": Phase="Pending", Reason="", readiness=false. Elapsed: 9.779222ms
Jan  1 13:40:59.676: INFO: Pod "pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092988466s
Jan  1 13:41:01.682: INFO: Pod "pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099287798s
Jan  1 13:41:03.692: INFO: Pod "pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108743091s
Jan  1 13:41:05.700: INFO: Pod "pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116762455s
Jan  1 13:41:07.709: INFO: Pod "pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.126092779s
STEP: Saw pod success
Jan  1 13:41:07.709: INFO: Pod "pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64" satisfied condition "success or failure"
Jan  1 13:41:07.713: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64 container configmap-volume-test: 
STEP: delete the pod
Jan  1 13:41:07.792: INFO: Waiting for pod pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64 to disappear
Jan  1 13:41:07.801: INFO: Pod pod-configmaps-6f3b1a68-b42d-4497-810e-d5bc6913be64 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:41:07.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7691" for this suite.
Jan  1 13:41:13.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:41:14.064: INFO: namespace configmap-7691 deletion completed in 6.25272327s

• [SLOW TEST:16.739 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
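
"with mappings as non-root" adds two twists to the earlier volume case: an items: remapping of keys to paths, and a pod-level runAsUser. Sketch (names, UID, and image illustrative):

kubectl create configmap configmap-test-volume-map --from-literal=data-2=value-2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2
EOF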
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:41:14.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  1 13:41:30.425: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  1 13:41:30.436: INFO: Pod pod-with-prestop-http-hook still exists
Jan  1 13:41:32.437: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  1 13:41:32.444: INFO: Pod pod-with-prestop-http-hook still exists
Jan  1 13:41:34.437: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  1 13:41:34.459: INFO: Pod pod-with-prestop-http-hook still exists
Jan  1 13:41:36.437: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  1 13:41:36.450: INFO: Pod pod-with-prestop-http-hook still exists
Jan  1 13:41:38.437: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  1 13:41:38.454: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:41:38.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-953" for this suite.
Jan  1 13:42:00.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:42:00.797: INFO: namespace container-lifecycle-hook-953 deletion completed in 22.216710267s

• [SLOW TEST:46.731 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
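
The hook fires on deletion: before the container receives SIGTERM, the kubelet issues the preStop HTTP GET, and the test's handler pod records it ("check prestop hook" above). A self-contained sketch where the hook targets the pod's own server instead of a separate handler (image and path illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /
          port: 80   # host defaults to the pod's own IP
EOF
kubectl delete pod pod-with-prestop-http-hook   # GET / is issued before termination proceeds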
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:42:00.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-52ca1efa-7aae-4b2c-b842-c824a48d8ed7
STEP: Creating configMap with name cm-test-opt-upd-91184922-15d8-440d-b009-1ae7ed170cfa
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-52ca1efa-7aae-4b2c-b842-c824a48d8ed7
STEP: Updating configmap cm-test-opt-upd-91184922-15d8-440d-b009-1ae7ed170cfa
STEP: Creating configMap with name cm-test-opt-create-09e3058f-664f-49e2-a06f-8bed64e146fb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:42:17.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3937" for this suite.
Jan  1 13:42:39.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:42:40.069: INFO: namespace configmap-3937 deletion completed in 22.189605187s

• [SLOW TEST:39.271 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
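
The "optional" in the spec name is the configMap volume's optional: true flag: the pod starts even though the referenced map does not exist yet, and the mounted file appears once the map is created (names, image, and timing assumptions illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional
spec:
  containers:
  - name: createcm-volume-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: createcm-volume
      mountPath: /etc/createcm-volume
  volumes:
  - name: createcm-volume
    configMap:
      name: cm-test-opt-create   # does not exist at pod creation time
      optional: true
EOF
kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1
kubectl exec pod-configmaps-optional -- cat /etc/createcm-volume/data-1   # appears after the kubelet's next sync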
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:42:40.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-e900d6e3-5b1f-4fad-b03f-5d0081a70239
STEP: Creating a pod to test consume secrets
Jan  1 13:42:40.229: INFO: Waiting up to 5m0s for pod "pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7" in namespace "secrets-405" to be "success or failure"
Jan  1 13:42:40.259: INFO: Pod "pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7": Phase="Pending", Reason="", readiness=false. Elapsed: 29.504101ms
Jan  1 13:42:42.291: INFO: Pod "pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060936051s
Jan  1 13:42:44.307: INFO: Pod "pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077324968s
Jan  1 13:42:46.316: INFO: Pod "pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086721256s
Jan  1 13:42:48.335: INFO: Pod "pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7": Phase="Running", Reason="", readiness=true. Elapsed: 8.10553202s
Jan  1 13:42:50.347: INFO: Pod "pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117335182s
STEP: Saw pod success
Jan  1 13:42:50.347: INFO: Pod "pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7" satisfied condition "success or failure"
Jan  1 13:42:50.351: INFO: Trying to get logs from node iruya-node pod pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7 container secret-volume-test: 
STEP: delete the pod
Jan  1 13:42:50.418: INFO: Waiting for pod pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7 to disappear
Jan  1 13:42:50.439: INFO: Pod pod-secrets-e9d1f384-fe20-4605-a4f4-34b136ec13a7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:42:50.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-405" for this suite.
Jan  1 13:42:56.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:42:56.633: INFO: namespace secrets-405 deletion completed in 6.184401017s

• [SLOW TEST:16.564 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
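
Mapping individual secret keys to custom paths and file modes, as this spec does, is expressed with the items and mode fields of the secret volume source. A minimal sketch (illustrative names; modes are octal):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400               # per-item mode overrides the volume's defaultMode
EOF
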
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:42:56.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:42:56.741: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327" in namespace "downward-api-626" to be "success or failure"
Jan  1 13:42:56.766: INFO: Pod "downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327": Phase="Pending", Reason="", readiness=false. Elapsed: 24.631873ms
Jan  1 13:42:58.778: INFO: Pod "downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036725637s
Jan  1 13:43:00.811: INFO: Pod "downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070185545s
Jan  1 13:43:02.818: INFO: Pod "downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077063748s
Jan  1 13:43:04.829: INFO: Pod "downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08769681s
Jan  1 13:43:06.839: INFO: Pod "downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09758661s
STEP: Saw pod success
Jan  1 13:43:06.839: INFO: Pod "downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327" satisfied condition "success or failure"
Jan  1 13:43:06.843: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327 container client-container: 
STEP: delete the pod
Jan  1 13:43:06.927: INFO: Waiting for pod downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327 to disappear
Jan  1 13:43:06.935: INFO: Pod downwardapi-volume-fbded6ac-5e12-46ca-aa9b-417d100df327 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:43:06.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-626" for this suite.
Jan  1 13:43:12.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:43:13.103: INFO: namespace downward-api-626 deletion completed in 6.159347645s

• [SLOW TEST:16.468 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
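
The downward API volume used above surfaces a container's resource fields as files; the memory limit, for instance, is exposed through a resourceFieldRef. A minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi           # with the limit above, the file contains "64"
EOF
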
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:43:13.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  1 13:43:13.202: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3556,SelfLink:/api/v1/namespaces/watch-3556/configmaps/e2e-watch-test-resource-version,UID:fe76c026-73be-4776-8333-782829271bad,ResourceVersion:18898394,Generation:0,CreationTimestamp:2020-01-01 13:43:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 13:43:13.203: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3556,SelfLink:/api/v1/namespaces/watch-3556/configmaps/e2e-watch-test-resource-version,UID:fe76c026-73be-4776-8333-782829271bad,ResourceVersion:18898395,Generation:0,CreationTimestamp:2020-01-01 13:43:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:43:13.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3556" for this suite.
Jan  1 13:43:19.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:43:19.441: INFO: namespace watch-3556 deletion completed in 6.235065238s

• [SLOW TEST:6.338 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
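
Starting a watch from a known resourceVersion, which this spec does through the client library, can be reproduced against the raw API; events that occurred after that version are replayed in order. A sketch (namespace and object names are illustrative):

RV=$(kubectl get configmap demo-cm -n default -o jsonpath='{.metadata.resourceVersion}')
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
# streams ADDED/MODIFIED/DELETED events newer than ${RV}, one JSON object per line
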
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:43:19.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:43:19.611: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:43:20.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2746" for this suite.
Jan  1 13:43:26.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:43:26.540: INFO: namespace custom-resource-definition-2746 deletion completed in 6.2210811s

• [SLOW TEST:7.098 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
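
Creating and deleting a CRD, which is all this spec verifies, looks like the following under the apiextensions.k8s.io/v1beta1 API served by a v1.15 apiserver (a sketch; the group and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com         # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl delete crd foos.example.com
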
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:43:26.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-r6sx
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 13:43:26.720: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-r6sx" in namespace "subpath-1684" to be "success or failure"
Jan  1 13:43:26.734: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Pending", Reason="", readiness=false. Elapsed: 13.047664ms
Jan  1 13:43:28.743: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022545084s
Jan  1 13:43:30.752: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031391697s
Jan  1 13:43:32.769: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048373581s
Jan  1 13:43:34.800: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 8.079056599s
Jan  1 13:43:36.809: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 10.088364285s
Jan  1 13:43:38.818: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 12.097609139s
Jan  1 13:43:40.831: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 14.110198606s
Jan  1 13:43:42.840: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 16.118912371s
Jan  1 13:43:44.852: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 18.131386138s
Jan  1 13:43:46.865: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 20.14397346s
Jan  1 13:43:48.879: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 22.15840745s
Jan  1 13:43:50.891: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 24.169971233s
Jan  1 13:43:52.901: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 26.180503541s
Jan  1 13:43:54.911: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Running", Reason="", readiness=true. Elapsed: 28.190179895s
Jan  1 13:43:56.922: INFO: Pod "pod-subpath-test-configmap-r6sx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.201150953s
STEP: Saw pod success
Jan  1 13:43:56.922: INFO: Pod "pod-subpath-test-configmap-r6sx" satisfied condition "success or failure"
Jan  1 13:43:56.927: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-r6sx container test-container-subpath-configmap-r6sx: 
STEP: delete the pod
Jan  1 13:43:57.064: INFO: Waiting for pod pod-subpath-test-configmap-r6sx to disappear
Jan  1 13:43:57.140: INFO: Pod pod-subpath-test-configmap-r6sx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-r6sx
Jan  1 13:43:57.140: INFO: Deleting pod "pod-subpath-test-configmap-r6sx" in namespace "subpath-1684"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:43:57.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1684" for this suite.
Jan  1 13:44:03.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:44:03.733: INFO: namespace subpath-1684 deletion completed in 6.518736201s

• [SLOW TEST:37.190 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
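
A subPath volumeMount projects a single key of a volume onto one path, which is how the test above overlays an existing file without hiding the rest of the directory. A minimal sketch (illustrative names; note that subPath mounts do not receive live updates):

kubectl create configmap demo-cm --from-literal=hostname=from-configmap
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/hostname"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/hostname   # existing file, replaced by the single key
      subPath: hostname
  volumes:
  - name: cfg
    configMap:
      name: demo-cm
EOF
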
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:44:03.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0101 13:44:06.032369       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 13:44:06.032: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:44:06.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5864" for this suite.
Jan  1 13:44:12.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:44:13.099: INFO: namespace gc-5864 deletion completed in 7.061721239s

• [SLOW TEST:9.364 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
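
Cascading deletion rests on the ownerReferences the Deployment controller stamps on its ReplicaSets (and the ReplicaSet controller on its Pods); deleting the owner without orphaning lets the garbage collector remove the whole tree, as verified above. A sketch (the cascade flag's spelling depends on the kubectl version):

kubectl create deployment gc-demo --image=nginx:1.14-alpine
kubectl delete deployment gc-demo --cascade=foreground   # older kubectl: --cascade=true
# To orphan dependents instead:
#   kubectl delete deployment gc-demo --cascade=orphan   # older kubectl: --cascade=false
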
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:44:13.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0101 13:44:23.287453       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 13:44:23.287: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:44:23.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4825" for this suite.
Jan  1 13:44:29.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:44:29.446: INFO: namespace gc-4825 deletion completed in 6.152341354s

• [SLOW TEST:16.347 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:44:29.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  1 13:44:29.517: INFO: Waiting up to 5m0s for pod "pod-7e382e3f-8c03-4088-bf9f-bca934ba6653" in namespace "emptydir-2218" to be "success or failure"
Jan  1 13:44:29.522: INFO: Pod "pod-7e382e3f-8c03-4088-bf9f-bca934ba6653": Phase="Pending", Reason="", readiness=false. Elapsed: 5.248264ms
Jan  1 13:44:31.532: INFO: Pod "pod-7e382e3f-8c03-4088-bf9f-bca934ba6653": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014750913s
Jan  1 13:44:33.538: INFO: Pod "pod-7e382e3f-8c03-4088-bf9f-bca934ba6653": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021601972s
Jan  1 13:44:35.546: INFO: Pod "pod-7e382e3f-8c03-4088-bf9f-bca934ba6653": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02892734s
Jan  1 13:44:37.554: INFO: Pod "pod-7e382e3f-8c03-4088-bf9f-bca934ba6653": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037035185s
Jan  1 13:44:39.563: INFO: Pod "pod-7e382e3f-8c03-4088-bf9f-bca934ba6653": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.046251147s
STEP: Saw pod success
Jan  1 13:44:39.563: INFO: Pod "pod-7e382e3f-8c03-4088-bf9f-bca934ba6653" satisfied condition "success or failure"
Jan  1 13:44:39.568: INFO: Trying to get logs from node iruya-node pod pod-7e382e3f-8c03-4088-bf9f-bca934ba6653 container test-container: 
STEP: delete the pod
Jan  1 13:44:39.682: INFO: Waiting for pod pod-7e382e3f-8c03-4088-bf9f-bca934ba6653 to disappear
Jan  1 13:44:39.753: INFO: Pod pod-7e382e3f-8c03-4088-bf9f-bca934ba6653 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:44:39.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2218" for this suite.
Jan  1 13:44:45.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:44:45.944: INFO: namespace emptydir-2218 deletion completed in 6.17718125s

• [SLOW TEST:16.498 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
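
The emptyDir volume above is created fresh on the node when the pod starts and removed with the pod; with no medium specified it lives on node disk, and the spec checks the expected mode on the mount point. A minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                 # medium: Memory would use tmpfs instead
EOF
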
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:44:45.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-88b1bb81-5ef3-4b2f-9558-a6ba37730793
STEP: Creating a pod to test consume secrets
Jan  1 13:44:46.039: INFO: Waiting up to 5m0s for pod "pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e" in namespace "secrets-3224" to be "success or failure"
Jan  1 13:44:46.044: INFO: Pod "pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.212537ms
Jan  1 13:44:48.054: INFO: Pod "pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01536745s
Jan  1 13:44:50.062: INFO: Pod "pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023122639s
Jan  1 13:44:52.070: INFO: Pod "pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031261598s
Jan  1 13:44:54.089: INFO: Pod "pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050324995s
Jan  1 13:44:56.102: INFO: Pod "pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063639791s
STEP: Saw pod success
Jan  1 13:44:56.102: INFO: Pod "pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e" satisfied condition "success or failure"
Jan  1 13:44:56.107: INFO: Trying to get logs from node iruya-node pod pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e container secret-volume-test: 
STEP: delete the pod
Jan  1 13:44:56.207: INFO: Waiting for pod pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e to disappear
Jan  1 13:44:56.219: INFO: Pod pod-secrets-02cb287d-ec10-4ef5-8009-54fda552459e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:44:56.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3224" for this suite.
Jan  1 13:45:02.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:45:02.466: INFO: namespace secrets-3224 deletion completed in 6.233407658s

• [SLOW TEST:16.521 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:45:02.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-739ff065-af62-4479-adcb-a24735a28d37
STEP: Creating a pod to test consume secrets
Jan  1 13:45:02.641: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8" in namespace "projected-7244" to be "success or failure"
Jan  1 13:45:02.649: INFO: Pod "pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.46339ms
Jan  1 13:45:04.664: INFO: Pod "pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023109409s
Jan  1 13:45:06.673: INFO: Pod "pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03179971s
Jan  1 13:45:08.683: INFO: Pod "pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041574082s
Jan  1 13:45:10.695: INFO: Pod "pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053814975s
STEP: Saw pod success
Jan  1 13:45:10.695: INFO: Pod "pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8" satisfied condition "success or failure"
Jan  1 13:45:10.698: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8 container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 13:45:10.786: INFO: Waiting for pod pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8 to disappear
Jan  1 13:45:10.831: INFO: Pod pod-projected-secrets-376cc40f-1cc0-49b3-92d8-b7db01bdc0a8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:45:10.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7244" for this suite.
Jan  1 13:45:16.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:45:17.351: INFO: namespace projected-7244 deletion completed in 6.503794012s

• [SLOW TEST:14.884 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
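
A projected volume can combine several sources (secret, configMap, downwardAPI, serviceAccountToken) behind one mount, with defaultMode applied across them; that is what distinguishes these "Projected secret" specs from the plain secret-volume ones above. A minimal sketch (illustrative names):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: creds
      mountPath: /etc/projected
  volumes:
  - name: creds
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: demo-secret
EOF
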
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:45:17.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 13:45:17.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1020'
Jan  1 13:45:19.264: INFO: stderr: ""
Jan  1 13:45:19.264: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan  1 13:45:19.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1020'
Jan  1 13:45:24.235: INFO: stderr: ""
Jan  1 13:45:24.236: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:45:24.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1020" for this suite.
Jan  1 13:45:30.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:45:30.508: INFO: namespace kubectl-1020 deletion completed in 6.262702691s

• [SLOW TEST:13.155 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:45:30.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-20189717-5698-4345-997d-572962a3bcbc
STEP: Creating a pod to test consume secrets
Jan  1 13:45:30.635: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e" in namespace "projected-4413" to be "success or failure"
Jan  1 13:45:30.642: INFO: Pod "pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.970721ms
Jan  1 13:45:32.655: INFO: Pod "pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019471185s
Jan  1 13:45:34.664: INFO: Pod "pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028680238s
Jan  1 13:45:36.677: INFO: Pod "pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041854217s
Jan  1 13:45:38.695: INFO: Pod "pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059194283s
STEP: Saw pod success
Jan  1 13:45:38.695: INFO: Pod "pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e" satisfied condition "success or failure"
Jan  1 13:45:38.699: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 13:45:38.755: INFO: Waiting for pod pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e to disappear
Jan  1 13:45:38.763: INFO: Pod pod-projected-secrets-9f2430fb-b08a-449b-abff-8e418ab5af7e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:45:38.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4413" for this suite.
Jan  1 13:45:44.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:45:44.974: INFO: namespace projected-4413 deletion completed in 6.138680735s

• [SLOW TEST:14.466 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:45:44.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:45:45.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:45:53.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6352" for this suite.
Jan  1 13:46:39.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:46:39.817: INFO: namespace pods-6352 deletion completed in 46.229778656s

• [SLOW TEST:54.843 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
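
kubectl exec is the porcelain for the same pods/exec subresource this spec drives over WebSockets; the call is an upgraded streaming connection to the apiserver. A sketch (pod name is illustrative):

kubectl exec websocket-demo -- cat /etc/resolv.conf
# Underlying endpoint (SPDY or WebSocket upgrade):
#   /api/v1/namespaces/<ns>/pods/websocket-demo/exec?command=cat&command=/etc/resolv.conf&stdout=true
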
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:46:39.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 13:46:39.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-8337'
Jan  1 13:46:40.064: INFO: stderr: ""
Jan  1 13:46:40.064: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  1 13:46:50.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-8337 -o json'
Jan  1 13:46:50.302: INFO: stderr: ""
Jan  1 13:46:50.303: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-01T13:46:40Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-8337\",\n        \"resourceVersion\": \"18898959\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-8337/pods/e2e-test-nginx-pod\",\n        \"uid\": \"edd4b92e-4b9e-4a2a-a184-78ff5b689441\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-tb7fq\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-tb7fq\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-tb7fq\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-01T13:46:40Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-01T13:46:48Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-01T13:46:48Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-01T13:46:40Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://e5fcc9aaeea2b58e3b823f314d44539c627659b7a1f464091b47a51d9dfd064e\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-01T13:46:47Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-01T13:46:40Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  1 13:46:50.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8337'
Jan  1 13:46:50.816: INFO: stderr: ""
Jan  1 13:46:50.817: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan  1 13:46:50.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8337'
Jan  1 13:46:59.629: INFO: stderr: ""
Jan  1 13:46:59.629: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:46:59.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8337" for this suite.
Jan  1 13:47:05.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:47:05.837: INFO: namespace kubectl-8337 deletion completed in 6.199495478s

• [SLOW TEST:26.019 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
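
The replace flow above round-trips the live object: fetch it (resourceVersion included), rewrite the container image, and push it back, relying on the image field being one of the few mutable parts of a pod spec. A sketch mirroring those steps (the sed pipeline is illustrative, not the test's mechanism):

kubectl get pod e2e-test-nginx-pod -n kubectl-8337 -o yaml \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -n kubectl-8337 -f -
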
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:47:05.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan  1 13:47:05.992: INFO: Waiting up to 5m0s for pod "var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480" in namespace "var-expansion-4198" to be "success or failure"
Jan  1 13:47:06.033: INFO: Pod "var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480": Phase="Pending", Reason="", readiness=false. Elapsed: 41.230774ms
Jan  1 13:47:08.040: INFO: Pod "var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048114514s
Jan  1 13:47:10.096: INFO: Pod "var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104046816s
Jan  1 13:47:12.108: INFO: Pod "var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115728373s
Jan  1 13:47:14.127: INFO: Pod "var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134975981s
Jan  1 13:47:16.136: INFO: Pod "var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143796315s
STEP: Saw pod success
Jan  1 13:47:16.136: INFO: Pod "var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480" satisfied condition "success or failure"
Jan  1 13:47:16.140: INFO: Trying to get logs from node iruya-node pod var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480 container dapi-container: 
STEP: delete the pod
Jan  1 13:47:16.328: INFO: Waiting for pod var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480 to disappear
Jan  1 13:47:16.338: INFO: Pod var-expansion-8da3a053-5195-49cc-9443-0c073e1e8480 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:47:16.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4198" for this suite.
Jan  1 13:47:22.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:47:22.612: INFO: namespace var-expansion-4198 deletion completed in 6.267641151s

• [SLOW TEST:16.775 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
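
Env composition uses the $(VAR) syntax, which the kubelet expands from variables defined earlier in the same env list; the same expansion applies inside command and args, as a later spec in this run exercises. A minimal sketch (illustrative names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo composed=$(COMPOSED)"]
    env:
    - name: FIRST
      value: hello
    - name: COMPOSED
      value: "$(FIRST) world"    # expands to "hello world" before the container starts
EOF
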
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:47:22.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:47:22.777: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11" in namespace "downward-api-9208" to be "success or failure"
Jan  1 13:47:22.796: INFO: Pod "downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11": Phase="Pending", Reason="", readiness=false. Elapsed: 19.065208ms
Jan  1 13:47:24.807: INFO: Pod "downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029512641s
Jan  1 13:47:26.820: INFO: Pod "downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043035206s
Jan  1 13:47:28.867: INFO: Pod "downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089646152s
Jan  1 13:47:30.885: INFO: Pod "downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107772999s
Jan  1 13:47:32.905: INFO: Pod "downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.127761132s
STEP: Saw pod success
Jan  1 13:47:32.905: INFO: Pod "downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11" satisfied condition "success or failure"
Jan  1 13:47:32.913: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11 container client-container: 
STEP: delete the pod
Jan  1 13:47:33.036: INFO: Waiting for pod downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11 to disappear
Jan  1 13:47:33.046: INFO: Pod downwardapi-volume-d5d3df43-52a1-4eeb-bfb4-9698c8b00e11 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:47:33.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9208" for this suite.
Jan  1 13:47:39.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:47:39.199: INFO: namespace downward-api-9208 deletion completed in 6.146825517s

• [SLOW TEST:16.582 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:47:39.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan  1 13:47:39.316: INFO: Waiting up to 5m0s for pod "var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87" in namespace "var-expansion-2375" to be "success or failure"
Jan  1 13:47:39.320: INFO: Pod "var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094683ms
Jan  1 13:47:41.333: INFO: Pod "var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016699447s
Jan  1 13:47:43.344: INFO: Pod "var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027850918s
Jan  1 13:47:45.360: INFO: Pod "var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044359022s
Jan  1 13:47:47.375: INFO: Pod "var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058781748s
STEP: Saw pod success
Jan  1 13:47:47.375: INFO: Pod "var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87" satisfied condition "success or failure"
Jan  1 13:47:47.378: INFO: Trying to get logs from node iruya-node pod var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87 container dapi-container: 
STEP: delete the pod
Jan  1 13:47:47.439: INFO: Waiting for pod var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87 to disappear
Jan  1 13:47:47.445: INFO: Pod var-expansion-9009b65a-a502-4a90-b896-b2b41830bc87 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:47:47.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2375" for this suite.
Jan  1 13:47:53.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:47:53.606: INFO: namespace var-expansion-2375 deletion completed in 6.15394262s

• [SLOW TEST:14.406 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
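The substitution under test is the kubelet's $(VAR) expansion: references to the container's own env vars inside command/args are resolved before the container starts. A hedged minimal reproduction (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(GREETING) from $(POD_LOCATION)"]
    env:
    - name: GREETING
      value: hello
    - name: POD_LOCATION
      value: the-cluster
EOF
kubectl logs var-expansion-demo   # expected: hello from the-cluster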
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:47:53.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy:
  for i in `seq 1 600`; do
    test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5469.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5469.svc.cluster.local;
    test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;
    podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5469.pod.cluster.local"}');
    check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
    check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
    sleep 1;
  done

STEP: Running these commands on jessie:
  for i in `seq 1 600`; do
    test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5469.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5469.svc.cluster.local;
    test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;
    podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5469.pod.cluster.local"}');
    check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
    check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
    sleep 1;
  done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  1 13:48:08.007: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5469/dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1: the server could not find the requested resource (get pods dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1)
Jan  1 13:48:08.017: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5469/dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1: the server could not find the requested resource (get pods dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1)
Jan  1 13:48:08.023: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-5469.svc.cluster.local from pod dns-5469/dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1: the server could not find the requested resource (get pods dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1)
Jan  1 13:48:08.031: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-5469/dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1: the server could not find the requested resource (get pods dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1)
Jan  1 13:48:08.035: INFO: Unable to read jessie_udp@PodARecord from pod dns-5469/dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1: the server could not find the requested resource (get pods dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1)
Jan  1 13:48:08.039: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5469/dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1: the server could not find the requested resource (get pods dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1)
Jan  1 13:48:08.039: INFO: Lookups using dns-5469/dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-5469.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  1 13:48:13.087: INFO: DNS probes using dns-5469/dns-test-f2b006c2-06a0-4600-92ca-1519f5ef1da1 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:48:13.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5469" for this suite.
Jan  1 13:48:19.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:48:19.473: INFO: namespace dns-5469 deletion completed in 6.242300974s

• [SLOW TEST:25.866 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
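The probes above reduce to two checks: the pod's hostname, short and fully qualified, must resolve through the kubelet-managed /etc/hosts, and the synthesized pod A record <a>-<b>-<c>-<d>.<namespace>.pod.cluster.local (the pod IP with dots replaced by dashes) must resolve over both UDP and TCP. Roughly equivalent manual checks against a running probe pod (the short pod name and the IP here are hypothetical):

kubectl exec dns-test -n dns-5469 -- getent hosts dns-querier-1
kubectl exec dns-test -n dns-5469 -- getent hosts dns-querier-1.dns-test-service.dns-5469.svc.cluster.local
kubectl exec dns-test -n dns-5469 -- nslookup 10-44-0-1.dns-5469.pod.cluster.local   # assuming the pod IP is 10.44.0.1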
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:48:19.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:48:19.615: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.973369ms)
Jan  1 13:48:19.621: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.567063ms)
Jan  1 13:48:19.627: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.131282ms)
Jan  1 13:48:19.634: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.307787ms)
Jan  1 13:48:19.640: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.485482ms)
Jan  1 13:48:19.647: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.077832ms)
Jan  1 13:48:19.655: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.417146ms)
Jan  1 13:48:19.660: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.007592ms)
Jan  1 13:48:19.665: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.291259ms)
Jan  1 13:48:19.670: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.417213ms)
Jan  1 13:48:19.675: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.419852ms)
Jan  1 13:48:19.680: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.196306ms)
Jan  1 13:48:19.686: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.576582ms)
Jan  1 13:48:19.691: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.445452ms)
Jan  1 13:48:19.720: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 28.857995ms)
Jan  1 13:48:19.726: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.934932ms)
Jan  1 13:48:19.733: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.392218ms)
Jan  1 13:48:19.739: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.31866ms)
Jan  1 13:48:19.744: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.020388ms)
Jan  1 13:48:19.752: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.085622ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:48:19.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1467" for this suite.
Jan  1 13:48:25.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:48:25.953: INFO: namespace proxy-1467 deletion completed in 6.195649422s

• [SLOW TEST:6.479 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
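Each numbered line above is one timed GET against the node proxy subresource, which the apiserver forwards to the kubelet's /logs endpoint (hence the /var/log directory listing beginning with alternatives.log). The same call can be made by hand:

kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/
# or through a local API proxy:
kubectl proxy --port=8001 &
curl -s http://127.0.0.1:8001/api/v1/nodes/iruya-node/proxy/logs/ | head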
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:48:25.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy:
  for i in `seq 1 600`; do
    check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8503.svc.cluster.local;
    check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local;
    check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local;
    check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local;
    check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8503.svc.cluster.local;
    check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8503.svc.cluster.local;
    podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8503.pod.cluster.local"}');
    check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
    check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
    check="$$(dig +notcp +noall +answer +search 24.150.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.150.24_udp@PTR;
    check="$$(dig +tcp +noall +answer +search 24.150.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.150.24_tcp@PTR;
    sleep 1;
  done

STEP: Running these commands on jessie:
  for i in `seq 1 600`; do
    check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8503.svc.cluster.local;
    check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8503.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8503.svc.cluster.local;
    check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local;
    check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local;
    check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8503.svc.cluster.local;
    check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8503.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8503.svc.cluster.local;
    podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8503.pod.cluster.local"}');
    check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
    check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
    check="$$(dig +notcp +noall +answer +search 24.150.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.150.24_udp@PTR;
    check="$$(dig +tcp +noall +answer +search 24.150.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.150.24_tcp@PTR;
    sleep 1;
  done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  1 13:48:40.209: INFO: Unable to read wheezy_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.252: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.264: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.274: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.280: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.285: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.292: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.298: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.305: INFO: Unable to read 10.107.150.24_udp@PTR from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.313: INFO: Unable to read 10.107.150.24_tcp@PTR from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.317: INFO: Unable to read jessie_udp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.321: INFO: Unable to read jessie_tcp@dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.324: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.330: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.335: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.338: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-8503.svc.cluster.local from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.341: INFO: Unable to read jessie_udp@PodARecord from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.344: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.349: INFO: Unable to read 10.107.150.24_udp@PTR from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.354: INFO: Unable to read 10.107.150.24_tcp@PTR from pod dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893: the server could not find the requested resource (get pods dns-test-d222f535-f200-4d74-b55b-48f27872a893)
Jan  1 13:48:40.354: INFO: Lookups using dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893 failed for: [wheezy_udp@dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@dns-test-service.dns-8503.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-8503.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-8503.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.107.150.24_udp@PTR 10.107.150.24_tcp@PTR jessie_udp@dns-test-service.dns-8503.svc.cluster.local jessie_tcp@dns-test-service.dns-8503.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8503.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-8503.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-8503.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.107.150.24_udp@PTR 10.107.150.24_tcp@PTR]

Jan  1 13:48:45.800: INFO: DNS probes using dns-8503/dns-test-d222f535-f200-4d74-b55b-48f27872a893 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:48:46.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8503" for this suite.
Jan  1 13:48:52.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:48:52.357: INFO: namespace dns-8503 deletion completed in 6.228854246s

• [SLOW TEST:26.403 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
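The names probed above follow the standard in-cluster DNS grammar: an A record at <service>.<namespace>.svc.cluster.local, SRV records at _<port-name>._<protocol>.<service>.<namespace>.svc.cluster.local, and a PTR record for the ClusterIP. Run from any pod with dig installed (for example the suite's jessie image), the checks look like:

dig +noall +answer dns-test-service.dns-8503.svc.cluster.local A
dig +noall +answer _http._tcp.dns-test-service.dns-8503.svc.cluster.local SRV
dig +noall +answer -x 10.107.150.24   # PTR, i.e. 24.150.107.10.in-addr.arpa.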
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:48:52.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:49:47.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2685" for this suite.
Jan  1 13:49:53.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:49:54.009: INFO: namespace container-runtime-2685 deletion completed in 6.240726468s

• [SLOW TEST:61.653 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
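Each terminate-cmd-* case above pins down the Phase/State/RestartCount contract for one restart policy. For example, under restartPolicy: Never a container exiting non-zero should leave the pod Failed with restartCount 0; a hedged stand-alone check (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
# expected once settled: Failed 0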
SSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:49:54.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9746, will wait for the garbage collector to delete the pods
Jan  1 13:50:06.200: INFO: Deleting Job.batch foo took: 16.956513ms
Jan  1 13:50:06.501: INFO: Terminating Job.batch foo pods took: 300.453762ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:50:46.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9746" for this suite.
Jan  1 13:50:52.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:50:52.744: INFO: namespace job-9746 deletion completed in 6.108256172s

• [SLOW TEST:58.733 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
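The two assertions above (active pods == parallelism, then garbage-collected pods once the Job is gone) can be reproduced with a long-running Job; the names and label are hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    metadata:
      labels:
        job: foo
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sleep", "3600"]
EOF
kubectl get pods -l job=foo      # two Running pods, matching parallelism
kubectl delete job foo           # the garbage collector then removes the pods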
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:50:52.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  1 13:51:01.466: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d000dde9-d83d-4aa0-879e-0f2d18586bb4"
Jan  1 13:51:01.467: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d000dde9-d83d-4aa0-879e-0f2d18586bb4" in namespace "pods-3033" to be "terminated due to deadline exceeded"
Jan  1 13:51:01.476: INFO: Pod "pod-update-activedeadlineseconds-d000dde9-d83d-4aa0-879e-0f2d18586bb4": Phase="Running", Reason="", readiness=true. Elapsed: 9.0126ms
Jan  1 13:51:03.486: INFO: Pod "pod-update-activedeadlineseconds-d000dde9-d83d-4aa0-879e-0f2d18586bb4": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.018551166s
Jan  1 13:51:03.486: INFO: Pod "pod-update-activedeadlineseconds-d000dde9-d83d-4aa0-879e-0f2d18586bb4" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:51:03.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3033" for this suite.
Jan  1 13:51:09.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:51:09.695: INFO: namespace pods-3033 deletion completed in 6.20071336s

• [SLOW TEST:16.951 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
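activeDeadlineSeconds is one of the few pod spec fields that may be updated on a live pod, which is what this test exercises: once the deadline elapses, the kubelet kills the pod with reason DeadlineExceeded, matching the Failed/DeadlineExceeded transition logged above. A hedged sketch against a running pod (name hypothetical):

kubectl patch pod pod-update-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod pod-update-demo -o jsonpath='{.status.phase} {.status.reason}'
# expected after ~5s: Failed DeadlineExceeded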
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:51:09.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  1 13:51:18.963: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:51:19.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2805" for this suite.
Jan  1 13:51:25.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:51:25.452: INFO: namespace container-runtime-2805 deletion completed in 6.247755291s

• [SLOW TEST:15.755 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
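A sketch of the scenario: a non-root container writes its exit message to a non-default terminationMessagePath, and the kubelet copies it into the container status. The name and path are hypothetical, and this assumes, as the passing test implies, that the kubelet creates the termination-log file writable by non-root users:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
    securityContext:
      runAsUser: 1000
EOF
kubectl get pod termination-msg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
# expected: DONE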
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:51:25.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-dx6h
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 13:51:25.618: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dx6h" in namespace "subpath-4270" to be "success or failure"
Jan  1 13:51:25.648: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Pending", Reason="", readiness=false. Elapsed: 30.221252ms
Jan  1 13:51:27.659: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041543379s
Jan  1 13:51:29.692: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073800194s
Jan  1 13:51:31.700: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0824265s
Jan  1 13:51:33.711: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093391665s
Jan  1 13:51:35.720: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 10.1021484s
Jan  1 13:51:37.732: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 12.114485018s
Jan  1 13:51:39.744: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 14.125831799s
Jan  1 13:51:41.756: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 16.137952074s
Jan  1 13:51:43.764: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 18.14620214s
Jan  1 13:51:45.788: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 20.170120545s
Jan  1 13:51:47.803: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 22.18552986s
Jan  1 13:51:49.814: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 24.195776488s
Jan  1 13:51:51.827: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 26.20926223s
Jan  1 13:51:53.837: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Running", Reason="", readiness=true. Elapsed: 28.218947482s
Jan  1 13:51:55.845: INFO: Pod "pod-subpath-test-configmap-dx6h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.22700966s
STEP: Saw pod success
Jan  1 13:51:55.845: INFO: Pod "pod-subpath-test-configmap-dx6h" satisfied condition "success or failure"
Jan  1 13:51:55.858: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-dx6h container test-container-subpath-configmap-dx6h: 
STEP: delete the pod
Jan  1 13:51:56.093: INFO: Waiting for pod pod-subpath-test-configmap-dx6h to disappear
Jan  1 13:51:56.099: INFO: Pod pod-subpath-test-configmap-dx6h no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dx6h
Jan  1 13:51:56.099: INFO: Deleting pod "pod-subpath-test-configmap-dx6h" in namespace "subpath-4270"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:51:56.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4270" for this suite.
Jan  1 13:52:02.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:52:02.385: INFO: namespace subpath-4270 deletion completed in 6.277389902s

• [SLOW TEST:36.932 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
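subPath, as exercised above, mounts a single key of the ConfigMap volume at the mount path instead of the whole directory; the test's container then re-reads the file for roughly thirty seconds (the Running lines above) to check it stays consistently readable. A minimal sketch with hypothetical names:

kubectl create configmap subpath-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/demo/data-1
      subPath: data-1
  volumes:
  - name: cm
    configMap:
      name: subpath-demo
EOF
kubectl logs pod-subpath-demo   # expected: value-1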
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:52:02.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:52:07.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4246" for this suite.
Jan  1 13:52:13.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:52:14.100: INFO: namespace watch-4246 deletion completed in 6.247962173s

• [SLOW TEST:11.715 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
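The invariant being checked is that concurrent watches started from the same point deliver events in identical resourceVersion order. A rough manual analogue is to open two raw watch streams from one list resourceVersion and compare the metadata.resourceVersion sequences (namespace hypothetical):

RV=$(kubectl get configmaps -n default -o jsonpath='{.metadata.resourceVersion}')
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
# run the second command twice in parallel; both streams should emit the same ordered events (Ctrl-C to stop)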
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:52:14.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-3fd48ac4-b853-45c6-a2aa-21e794f45a69
STEP: Creating a pod to test consume secrets
Jan  1 13:52:14.346: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646" in namespace "projected-258" to be "success or failure"
Jan  1 13:52:14.375: INFO: Pod "pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646": Phase="Pending", Reason="", readiness=false. Elapsed: 28.518887ms
Jan  1 13:52:16.393: INFO: Pod "pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046349838s
Jan  1 13:52:18.405: INFO: Pod "pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058276238s
Jan  1 13:52:20.413: INFO: Pod "pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06680755s
Jan  1 13:52:22.448: INFO: Pod "pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101718555s
Jan  1 13:52:24.457: INFO: Pod "pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110542395s
STEP: Saw pod success
Jan  1 13:52:24.457: INFO: Pod "pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646" satisfied condition "success or failure"
Jan  1 13:52:24.461: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646 container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 13:52:24.641: INFO: Waiting for pod pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646 to disappear
Jan  1 13:52:24.700: INFO: Pod pod-projected-secrets-9e43b62a-9364-4daa-a684-e60f185b3646 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:52:24.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-258" for this suite.
Jan  1 13:52:30.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:52:30.876: INFO: namespace projected-258 deletion completed in 6.168171472s

• [SLOW TEST:16.776 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
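The "with mappings" variant projects a secret key under a caller-chosen path via items. A hedged minimal version (names hypothetical):

kubectl create secret generic projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-demo
          items:
          - key: data-1
            path: new-path-data-1
EOF
kubectl logs pod-projected-demo   # expected: value-1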
SSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:52:30.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan  1 13:52:31.014: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1825" to be "success or failure"
Jan  1 13:52:31.025: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.908389ms
Jan  1 13:52:33.034: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019752422s
Jan  1 13:52:35.062: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048081165s
Jan  1 13:52:37.072: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057988866s
Jan  1 13:52:39.079: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065074441s
Jan  1 13:52:41.098: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083827866s
STEP: Saw pod success
Jan  1 13:52:41.098: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  1 13:52:41.109: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  1 13:52:41.183: INFO: Waiting for pod pod-host-path-test to disappear
Jan  1 13:52:41.193: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:52:41.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1825" for this suite.
Jan  1 13:52:47.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:52:47.393: INFO: namespace hostpath-1825 deletion completed in 6.195348662s

• [SLOW TEST:16.516 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
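The mode check above mounts a hostPath and stats the mount point from inside the container. A stand-alone sketch (path and names hypothetical; DirectoryOrCreate makes the kubelet create the host directory if absent):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c %a /test-volume"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
kubectl logs pod-host-path-demo   # prints the host directory's mode as seen in the container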
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:52:47.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  1 13:52:47.661: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:52:59.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7589" for this suite.
Jan  1 13:53:05.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:53:05.830: INFO: namespace init-container-7589 deletion completed in 6.214877961s

• [SLOW TEST:18.435 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
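With restartPolicy: Never, a failing init container is not retried: the pod goes straight to Failed and the app containers never start, which is the contract asserted above. A minimal sketch (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["false"]
  containers:
  - name: app
    image: busybox
    command: ["true"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # expected: Failed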
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:53:05.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  1 13:53:06.031: INFO: Waiting up to 5m0s for pod "pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7" in namespace "emptydir-3021" to be "success or failure"
Jan  1 13:53:06.045: INFO: Pod "pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.34911ms
Jan  1 13:53:08.054: INFO: Pod "pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022778538s
Jan  1 13:53:10.072: INFO: Pod "pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040111007s
Jan  1 13:53:12.080: INFO: Pod "pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048287323s
Jan  1 13:53:14.089: INFO: Pod "pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057039246s
STEP: Saw pod success
Jan  1 13:53:14.089: INFO: Pod "pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7" satisfied condition "success or failure"
Jan  1 13:53:14.093: INFO: Trying to get logs from node iruya-node pod pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7 container test-container: 
STEP: delete the pod
Jan  1 13:53:14.276: INFO: Waiting for pod pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7 to disappear
Jan  1 13:53:14.296: INFO: Pod pod-24ebea17-91d3-4ed9-8960-0fd19fe33ba7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:53:14.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3021" for this suite.
Jan  1 13:53:20.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:53:20.520: INFO: namespace emptydir-3021 deletion completed in 6.207173815s

• [SLOW TEST:14.690 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
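The (root,0777,default) case writes a file as root with mode 0777 onto an emptyDir backed by the node's default medium and verifies the resulting permissions. A rough busybox equivalent (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/test-file && chmod 0777 /test-volume/test-file && stat -c '%a %u' /test-volume/test-file"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir: {}
EOF
kubectl logs emptydir-mode-demo   # expected: 777 0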
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:53:20.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-wsdm
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 13:53:20.736: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wsdm" in namespace "subpath-7742" to be "success or failure"
Jan  1 13:53:20.746: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Pending", Reason="", readiness=false. Elapsed: 9.980809ms
Jan  1 13:53:22.756: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020018851s
Jan  1 13:53:24.763: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026723518s
Jan  1 13:53:26.775: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039096947s
Jan  1 13:53:28.803: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 8.067084747s
Jan  1 13:53:30.814: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 10.077713467s
Jan  1 13:53:32.826: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 12.089584053s
Jan  1 13:53:34.837: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 14.101329193s
Jan  1 13:53:36.851: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 16.114695445s
Jan  1 13:53:38.866: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 18.129927523s
Jan  1 13:53:40.878: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 20.14183624s
Jan  1 13:53:42.887: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 22.151106969s
Jan  1 13:53:44.893: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 24.157202215s
Jan  1 13:53:46.900: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Running", Reason="", readiness=true. Elapsed: 26.163804791s
Jan  1 13:53:48.912: INFO: Pod "pod-subpath-test-projected-wsdm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.175546491s
STEP: Saw pod success
Jan  1 13:53:48.912: INFO: Pod "pod-subpath-test-projected-wsdm" satisfied condition "success or failure"
Jan  1 13:53:48.918: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-wsdm container test-container-subpath-projected-wsdm: 
STEP: delete the pod
Jan  1 13:53:49.070: INFO: Waiting for pod pod-subpath-test-projected-wsdm to disappear
Jan  1 13:53:49.084: INFO: Pod pod-subpath-test-projected-wsdm no longer exists
STEP: Deleting pod pod-subpath-test-projected-wsdm
Jan  1 13:53:49.084: INFO: Deleting pod "pod-subpath-test-projected-wsdm" in namespace "subpath-7742"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:53:49.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7742" for this suite.
Jan  1 13:53:55.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:53:55.355: INFO: namespace subpath-7742 deletion completed in 6.252541927s

• [SLOW TEST:34.834 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:53:55.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  1 13:53:55.492: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:54:12.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3087" for this suite.
Jan  1 13:54:34.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:54:34.451: INFO: namespace init-container-3087 deletion completed in 22.234759541s

• [SLOW TEST:39.095 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
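For context: on a RestartAlways pod, init containers run to completion one at a time before any app container starts; the conformance test above asserts only that both init containers are invoked and the pod then runs. A minimal Go sketch of such a pod, with names and image as illustrative assumptions:

// Sketch only: a RestartAlways pod with two sequential init containers.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run in order; each must exit 0 before the next starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			// The app container starts only after both init containers succeed.
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
}

func main() { fmt.Println(initContainerPod().Name) }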
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:54:34.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d
Jan  1 13:54:34.622: INFO: Pod name my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d: Found 0 pods out of 1
Jan  1 13:54:39.634: INFO: Pod name my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d: Found 1 pods out of 1
Jan  1 13:54:39.635: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d" are running
Jan  1 13:54:43.650: INFO: Pod "my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d-blnm4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:54:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:54:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:54:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-01 13:54:34 +0000 UTC Reason: Message:}])
Jan  1 13:54:43.650: INFO: Trying to dial the pod
Jan  1 13:54:48.675: INFO: Controller my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d: Got expected result from replica 1 [my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d-blnm4]: "my-hostname-basic-1752e881-fe00-409a-8184-acb591ae9d9d-blnm4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:54:48.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-984" for this suite.
Jan  1 13:54:54.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:54:54.882: INFO: namespace replication-controller-984 deletion completed in 6.201493385s

• [SLOW TEST:20.430 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
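For context: the test above creates a one-replica ReplicationController whose pod serves its own hostname over HTTP, then dials the replica and expects the pod name back ("Got expected result from replica 1"). A minimal Go sketch of a comparable controller; the serve-hostname image and port are assumptions, not values taken from this log:

// Sketch only: a ReplicationController whose pods echo their hostname.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func hostnameRC() *corev1.ReplicationController {
	name := "my-hostname-basic"
	replicas := int32(1)
	labels := map[string]string{"name": name}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // RC selectors are plain label maps
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // assumed port
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(hostnameRC().Name) }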
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:54:54.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 13:54:54.954: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  1 13:54:55.020: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  1 13:55:00.028: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  1 13:55:04.050: INFO: Creating deployment "test-rolling-update-deployment"
Jan  1 13:55:04.065: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  1 13:55:04.130: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  1 13:55:06.142: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  1 13:55:06.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:55:08.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:55:10.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713483704, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 13:55:12.163: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  1 13:55:12.195: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2837,SelfLink:/apis/apps/v1/namespaces/deployment-2837/deployments/test-rolling-update-deployment,UID:31eb568b-17fd-49e4-a917-6c66a1da5e3a,ResourceVersion:18900390,Generation:1,CreationTimestamp:2020-01-01 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-01 13:55:04 +0000 UTC 2020-01-01 13:55:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-01 13:55:11 +0000 UTC 2020-01-01 13:55:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  1 13:55:12.204: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2837,SelfLink:/apis/apps/v1/namespaces/deployment-2837/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:854e002c-0827-4575-a9c9-d90ed94630b0,ResourceVersion:18900379,Generation:1,CreationTimestamp:2020-01-01 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 31eb568b-17fd-49e4-a917-6c66a1da5e3a 0xc00072e587 0xc00072e588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  1 13:55:12.204: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  1 13:55:12.205: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2837,SelfLink:/apis/apps/v1/namespaces/deployment-2837/replicasets/test-rolling-update-controller,UID:d723710b-b6d3-455d-b66b-f96c912d70f8,ResourceVersion:18900389,Generation:2,CreationTimestamp:2020-01-01 13:54:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 31eb568b-17fd-49e4-a917-6c66a1da5e3a 0xc00072e49f 0xc00072e4b0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 13:55:12.213: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-55s6l" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-55s6l,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2837,SelfLink:/api/v1/namespaces/deployment-2837/pods/test-rolling-update-deployment-79f6b9d75c-55s6l,UID:40ee2492-7d57-4848-83cd-5178290d8de9,ResourceVersion:18900378,Generation:0,CreationTimestamp:2020-01-01 13:55:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 854e002c-0827-4575-a9c9-d90ed94630b0 0xc000b701a7 0xc000b701a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2rxk5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2rxk5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-2rxk5 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000b70230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000b70250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:55:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:55:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:55:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 13:55:04 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-01 13:55:04 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-01 13:55:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a31b292b95e08bb41b0ad77a87399c457c5c991519a20b290dbf35e2944b0d99}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:55:12.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2837" for this suite.
Jan  1 13:55:18.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:55:18.482: INFO: namespace deployment-2837 deletion completed in 6.262377992s

• [SLOW TEST:23.599 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
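A note on the object dumps above: "MaxUnavailable:25%!,(MISSING)" is Go fmt noise for a plain "25%"; the Deployment carries the default RollingUpdate strategy with maxUnavailable and maxSurge of 25%. A compact Go sketch of an equivalent Deployment, reconstructed from the fields visible in the dump:

// Sketch reconstructed from the dumped Deployment: redis image, "name: sample-pod"
// labels, one replica, default 25%/25% rolling-update bounds.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func rollingUpdateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(rollingUpdateDeployment().Name) }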
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:55:18.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-e8eca1f0-bb37-4f24-b9e9-039c65ed8a30
STEP: Creating secret with name secret-projected-all-test-volume-4fa1a889-7638-4e9c-b8db-474651edb54f
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  1 13:55:18.690: INFO: Waiting up to 5m0s for pod "projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3" in namespace "projected-5369" to be "success or failure"
Jan  1 13:55:18.778: INFO: Pod "projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3": Phase="Pending", Reason="", readiness=false. Elapsed: 88.043867ms
Jan  1 13:55:20.795: INFO: Pod "projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105228271s
Jan  1 13:55:23.584: INFO: Pod "projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.893650435s
Jan  1 13:55:25.596: INFO: Pod "projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.905934059s
Jan  1 13:55:27.606: INFO: Pod "projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.91538804s
STEP: Saw pod success
Jan  1 13:55:27.606: INFO: Pod "projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3" satisfied condition "success or failure"
Jan  1 13:55:27.611: INFO: Trying to get logs from node iruya-node pod projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3 container projected-all-volume-test: 
STEP: delete the pod
Jan  1 13:55:27.722: INFO: Waiting for pod projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3 to disappear
Jan  1 13:55:27.766: INFO: Pod projected-volume-8f6eaf54-a6ef-4399-8126-178baafabff3 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:55:27.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5369" for this suite.
Jan  1 13:55:33.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:55:34.031: INFO: namespace projected-5369 deletion completed in 6.258123899s

• [SLOW TEST:15.549 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
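For context: "all projections" means one projected volume fed by several sources at once; the STEPs above create a ConfigMap and a Secret for it, and the usual variant of this test adds a downward-API item as well. A minimal Go sketch of such a pod; names, image, and the downward-API item are illustrative assumptions:

// Sketch only: a projected volume combining ConfigMap, Secret, and downward API.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func allProjectionsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm"},
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret"},
							}},
							{DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-all-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls /projected-volume && cat /projected-volume/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/projected-volume"}},
			}},
		},
	}
}

func main() { fmt.Println(allProjectionsPod().Name) }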
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:55:34.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-cjfvx in namespace proxy-5878
I0101 13:55:34.252081       8 runners.go:180] Created replication controller with name: proxy-service-cjfvx, namespace: proxy-5878, replica count: 1
I0101 13:55:35.303677       8 runners.go:180] proxy-service-cjfvx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 13:55:36.304161       8 runners.go:180] proxy-service-cjfvx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 13:55:37.304996       8 runners.go:180] proxy-service-cjfvx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 13:55:38.305688       8 runners.go:180] proxy-service-cjfvx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 13:55:39.306299       8 runners.go:180] proxy-service-cjfvx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 13:55:40.306817       8 runners.go:180] proxy-service-cjfvx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 13:55:41.307338       8 runners.go:180] proxy-service-cjfvx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 13:55:42.308381       8 runners.go:180] proxy-service-cjfvx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0101 13:55:43.309194       8 runners.go:180] proxy-service-cjfvx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  1 13:55:43.317: INFO: setup took 9.194165637s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  1 13:55:43.351: INFO: (0) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 33.502344ms)
Jan  1 13:55:43.351: INFO: (0) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 33.713396ms)
Jan  1 13:55:43.351: INFO: (0) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 34.617623ms)
Jan  1 13:55:43.351: INFO: (0) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 34.555753ms)
Jan  1 13:55:43.352: INFO: (0) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 34.294413ms)
Jan  1 13:55:43.352: INFO: (0) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 34.511465ms)
Jan  1 13:55:43.352: INFO: (0) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 34.780733ms)
Jan  1 13:55:43.352: INFO: (0) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 34.999107ms)
Jan  1 13:55:43.353: INFO: (0) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 36.531914ms)
Jan  1 13:55:43.354: INFO: (0) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 36.602253ms)
Jan  1 13:55:43.354: INFO: (0) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 36.981086ms)
Jan  1 13:55:43.369: INFO: (0) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 51.089844ms)
Jan  1 13:55:43.369: INFO: (0) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 51.276461ms)
Jan  1 13:55:43.369: INFO: (0) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 51.746003ms)
Jan  1 13:55:43.370: INFO: (0) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<... (200; 12.283416ms)
Jan  1 13:55:43.390: INFO: (1) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 19.855721ms)
Jan  1 13:55:43.390: INFO: (1) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 20.001838ms)
Jan  1 13:55:43.390: INFO: (1) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 19.830401ms)
Jan  1 13:55:43.391: INFO: (1) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 20.239412ms)
Jan  1 13:55:43.391: INFO: (1) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 20.179398ms)
Jan  1 13:55:43.391: INFO: (1) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: ... (200; 20.024146ms)
Jan  1 13:55:43.392: INFO: (1) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 20.919332ms)
Jan  1 13:55:43.392: INFO: (1) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 21.213159ms)
Jan  1 13:55:43.392: INFO: (1) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 21.473096ms)
Jan  1 13:55:43.392: INFO: (1) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 21.417414ms)
Jan  1 13:55:43.392: INFO: (1) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 21.693017ms)
Jan  1 13:55:43.392: INFO: (1) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 21.622504ms)
Jan  1 13:55:43.393: INFO: (1) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 22.550744ms)
Jan  1 13:55:43.393: INFO: (1) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 22.881798ms)
Jan  1 13:55:43.404: INFO: (2) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 10.615587ms)
Jan  1 13:55:43.405: INFO: (2) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 11.146229ms)
Jan  1 13:55:43.405: INFO: (2) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 11.171552ms)
Jan  1 13:55:43.406: INFO: (2) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 11.796793ms)
Jan  1 13:55:43.406: INFO: (2) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 12.622277ms)
Jan  1 13:55:43.406: INFO: (2) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 12.567464ms)
Jan  1 13:55:43.406: INFO: (2) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 12.599928ms)
Jan  1 13:55:43.406: INFO: (2) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 12.891431ms)
Jan  1 13:55:43.407: INFO: (2) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 13.166991ms)
Jan  1 13:55:43.407: INFO: (2) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<... (200; 20.938629ms)
Jan  1 13:55:43.437: INFO: (3) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 21.536062ms)
Jan  1 13:55:43.437: INFO: (3) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 21.413534ms)
Jan  1 13:55:43.438: INFO: (3) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 21.960363ms)
Jan  1 13:55:43.438: INFO: (3) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 21.634743ms)
Jan  1 13:55:43.439: INFO: (3) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 22.590744ms)
Jan  1 13:55:43.439: INFO: (3) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 22.935523ms)
Jan  1 13:55:43.439: INFO: (3) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 23.665668ms)
Jan  1 13:55:43.444: INFO: (3) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 28.136993ms)
Jan  1 13:55:43.444: INFO: (3) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 27.855112ms)
Jan  1 13:55:43.444: INFO: (3) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 28.59975ms)
Jan  1 13:55:43.444: INFO: (3) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 28.693481ms)
Jan  1 13:55:43.445: INFO: (3) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 28.530108ms)
Jan  1 13:55:43.445: INFO: (3) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 28.91401ms)
Jan  1 13:55:43.451: INFO: (4) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 6.437744ms)
Jan  1 13:55:43.452: INFO: (4) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 6.829689ms)
Jan  1 13:55:43.457: INFO: (4) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 11.815488ms)
Jan  1 13:55:43.457: INFO: (4) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 11.960002ms)
Jan  1 13:55:43.457: INFO: (4) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 12.564815ms)
Jan  1 13:55:43.458: INFO: (4) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 13.19773ms)
Jan  1 13:55:43.458: INFO: (4) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 13.701619ms)
Jan  1 13:55:43.459: INFO: (4) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 14.25921ms)
Jan  1 13:55:43.461: INFO: (4) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 16.229536ms)
Jan  1 13:55:43.461: INFO: (4) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: ... (200; 10.783814ms)
Jan  1 13:55:43.484: INFO: (5) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 11.45019ms)
Jan  1 13:55:43.485: INFO: (5) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 11.635255ms)
Jan  1 13:55:43.485: INFO: (5) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 11.825591ms)
Jan  1 13:55:43.485: INFO: (5) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 12.586624ms)
Jan  1 13:55:43.487: INFO: (5) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 13.797391ms)
Jan  1 13:55:43.487: INFO: (5) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 14.717636ms)
Jan  1 13:55:43.489: INFO: (5) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 15.799457ms)
Jan  1 13:55:43.489: INFO: (5) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: ... (200; 9.926087ms)
Jan  1 13:55:43.503: INFO: (6) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 10.038959ms)
Jan  1 13:55:43.503: INFO: (6) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 10.307869ms)
Jan  1 13:55:43.503: INFO: (6) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 10.081156ms)
Jan  1 13:55:43.504: INFO: (6) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 10.557047ms)
Jan  1 13:55:43.505: INFO: (6) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 11.99875ms)
Jan  1 13:55:43.505: INFO: (6) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 11.644209ms)
Jan  1 13:55:43.506: INFO: (6) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 12.489403ms)
Jan  1 13:55:43.506: INFO: (6) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<... (200; 5.478728ms)
Jan  1 13:55:43.518: INFO: (7) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 7.511135ms)
Jan  1 13:55:43.519: INFO: (7) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 8.018102ms)
Jan  1 13:55:43.519: INFO: (7) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 8.038728ms)
Jan  1 13:55:43.520: INFO: (7) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 8.653819ms)
Jan  1 13:55:43.520: INFO: (7) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 9.254787ms)
Jan  1 13:55:43.521: INFO: (7) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 10.340681ms)
Jan  1 13:55:43.526: INFO: (7) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 14.746404ms)
Jan  1 13:55:43.526: INFO: (7) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 14.786283ms)
Jan  1 13:55:43.526: INFO: (7) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 14.969481ms)
Jan  1 13:55:43.531: INFO: (7) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 19.442315ms)
Jan  1 13:55:43.531: INFO: (7) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 20.041777ms)
Jan  1 13:55:43.531: INFO: (7) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 20.37635ms)
Jan  1 13:55:43.533: INFO: (7) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 21.600737ms)
Jan  1 13:55:43.546: INFO: (8) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 13.062967ms)
Jan  1 13:55:43.547: INFO: (8) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 13.420796ms)
Jan  1 13:55:43.548: INFO: (8) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<... (200; 20.936155ms)
Jan  1 13:55:43.564: INFO: (9) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 9.161591ms)
Jan  1 13:55:43.564: INFO: (9) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 8.840474ms)
Jan  1 13:55:43.564: INFO: (9) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 9.566616ms)
Jan  1 13:55:43.564: INFO: (9) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 9.644255ms)
Jan  1 13:55:43.565: INFO: (9) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 10.642656ms)
Jan  1 13:55:43.566: INFO: (9) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<... (200; 11.151947ms)
Jan  1 13:55:43.567: INFO: (9) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 12.066627ms)
Jan  1 13:55:43.567: INFO: (9) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 11.725417ms)
Jan  1 13:55:43.567: INFO: (9) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 12.484618ms)
Jan  1 13:55:43.567: INFO: (9) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 12.497559ms)
Jan  1 13:55:43.568: INFO: (9) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 12.795541ms)
Jan  1 13:55:43.568: INFO: (9) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 12.830479ms)
Jan  1 13:55:43.568: INFO: (9) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 13.449728ms)
Jan  1 13:55:43.568: INFO: (9) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 13.502289ms)
Jan  1 13:55:43.576: INFO: (10) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 7.934733ms)
Jan  1 13:55:43.577: INFO: (10) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 8.188694ms)
Jan  1 13:55:43.577: INFO: (10) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 8.978669ms)
Jan  1 13:55:43.578: INFO: (10) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 9.955857ms)
Jan  1 13:55:43.580: INFO: (10) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 11.683813ms)
Jan  1 13:55:43.582: INFO: (10) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 12.934368ms)
Jan  1 13:55:43.582: INFO: (10) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 13.205408ms)
Jan  1 13:55:43.582: INFO: (10) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 13.578658ms)
Jan  1 13:55:43.583: INFO: (10) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 14.042546ms)
Jan  1 13:55:43.583: INFO: (10) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: ... (200; 14.942046ms)
Jan  1 13:55:43.584: INFO: (10) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 15.3713ms)
Jan  1 13:55:43.584: INFO: (10) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 15.233983ms)
Jan  1 13:55:43.585: INFO: (10) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 15.689811ms)
Jan  1 13:55:43.586: INFO: (10) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 16.693897ms)
Jan  1 13:55:43.595: INFO: (11) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 9.098241ms)
Jan  1 13:55:43.595: INFO: (11) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 9.161583ms)
Jan  1 13:55:43.597: INFO: (11) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 11.564849ms)
Jan  1 13:55:43.598: INFO: (11) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 11.598882ms)
Jan  1 13:55:43.599: INFO: (11) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 13.032538ms)
Jan  1 13:55:43.599: INFO: (11) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 13.35516ms)
Jan  1 13:55:43.599: INFO: (11) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 13.405708ms)
Jan  1 13:55:43.600: INFO: (11) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 13.425537ms)
Jan  1 13:55:43.602: INFO: (11) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 15.594289ms)
Jan  1 13:55:43.602: INFO: (11) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 15.755848ms)
Jan  1 13:55:43.603: INFO: (11) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 16.670025ms)
Jan  1 13:55:43.603: INFO: (11) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: ... (200; 20.614949ms)
Jan  1 13:55:43.630: INFO: (12) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 19.300961ms)
Jan  1 13:55:43.632: INFO: (12) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 22.394746ms)
Jan  1 13:55:43.633: INFO: (12) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 22.511461ms)
Jan  1 13:55:43.636: INFO: (12) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 26.392688ms)
Jan  1 13:55:43.636: INFO: (12) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 26.67127ms)
Jan  1 13:55:43.637: INFO: (12) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 27.427831ms)
Jan  1 13:55:43.637: INFO: (12) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 26.990936ms)
Jan  1 13:55:43.637: INFO: (12) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 28.073342ms)
Jan  1 13:55:43.637: INFO: (12) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 26.990205ms)
Jan  1 13:55:43.639: INFO: (12) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 28.192595ms)
Jan  1 13:55:43.639: INFO: (12) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 28.25831ms)
Jan  1 13:55:43.639: INFO: (12) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: ... (200; 11.398684ms)
Jan  1 13:55:43.652: INFO: (13) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 12.520837ms)
Jan  1 13:55:43.660: INFO: (13) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 20.076921ms)
Jan  1 13:55:43.660: INFO: (13) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 20.766492ms)
Jan  1 13:55:43.660: INFO: (13) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 20.419232ms)
Jan  1 13:55:43.662: INFO: (13) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 22.357402ms)
Jan  1 13:55:43.663: INFO: (13) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 23.538139ms)
Jan  1 13:55:43.663: INFO: (13) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 23.415968ms)
Jan  1 13:55:43.663: INFO: (13) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 23.395829ms)
Jan  1 13:55:43.663: INFO: (13) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 23.178154ms)
Jan  1 13:55:43.664: INFO: (13) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 23.744893ms)
Jan  1 13:55:43.664: INFO: (13) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 24.947943ms)
Jan  1 13:55:43.664: INFO: (13) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 24.647007ms)
Jan  1 13:55:43.664: INFO: (13) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 24.562215ms)
Jan  1 13:55:43.665: INFO: (13) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 25.227299ms)
Jan  1 13:55:43.676: INFO: (14) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 11.281939ms)
Jan  1 13:55:43.677: INFO: (14) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 10.603402ms)
Jan  1 13:55:43.677: INFO: (14) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 10.211162ms)
Jan  1 13:55:43.678: INFO: (14) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 11.44005ms)
Jan  1 13:55:43.678: INFO: (14) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 12.147503ms)
Jan  1 13:55:43.678: INFO: (14) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 12.402305ms)
Jan  1 13:55:43.678: INFO: (14) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 12.025289ms)
Jan  1 13:55:43.678: INFO: (14) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test (200; 12.297064ms)
Jan  1 13:55:43.679: INFO: (14) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 12.136698ms)
Jan  1 13:55:43.679: INFO: (14) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 12.364381ms)
Jan  1 13:55:43.680: INFO: (14) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 13.005018ms)
Jan  1 13:55:43.680: INFO: (14) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 13.142119ms)
Jan  1 13:55:43.680: INFO: (14) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 13.806825ms)
Jan  1 13:55:43.680: INFO: (14) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 13.460423ms)
Jan  1 13:55:43.687: INFO: (15) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 6.689599ms)
Jan  1 13:55:43.688: INFO: (15) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 7.255366ms)
Jan  1 13:55:43.688: INFO: (15) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<... (200; 7.109264ms)
Jan  1 13:55:43.688: INFO: (15) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 7.268601ms)
Jan  1 13:55:43.688: INFO: (15) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 7.507862ms)
Jan  1 13:55:43.688: INFO: (15) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 7.487971ms)
Jan  1 13:55:43.688: INFO: (15) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 7.760695ms)
Jan  1 13:55:43.689: INFO: (15) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 7.795201ms)
Jan  1 13:55:43.689: INFO: (15) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 8.431154ms)
Jan  1 13:55:43.691: INFO: (15) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 11.032079ms)
Jan  1 13:55:43.692: INFO: (15) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 11.096242ms)
Jan  1 13:55:43.692: INFO: (15) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 11.293688ms)
Jan  1 13:55:43.692: INFO: (15) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 11.678839ms)
Jan  1 13:55:43.696: INFO: (15) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 15.733157ms)
Jan  1 13:55:43.697: INFO: (15) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 16.124788ms)
Jan  1 13:55:43.709: INFO: (16) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<... (200; 11.210607ms)
Jan  1 13:55:43.709: INFO: (16) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 11.981584ms)
Jan  1 13:55:43.710: INFO: (16) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 12.103818ms)
Jan  1 13:55:43.710: INFO: (16) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 12.732237ms)
Jan  1 13:55:43.711: INFO: (16) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 13.379836ms)
Jan  1 13:55:43.711: INFO: (16) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 13.676908ms)
Jan  1 13:55:43.711: INFO: (16) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 13.433158ms)
Jan  1 13:55:43.711: INFO: (16) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 14.15338ms)
Jan  1 13:55:43.713: INFO: (16) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 16.171093ms)
Jan  1 13:55:43.713: INFO: (16) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname2/proxy/: bar (200; 15.426247ms)
Jan  1 13:55:43.715: INFO: (16) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 17.453842ms)
Jan  1 13:55:43.715: INFO: (16) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname1/proxy/: tls baz (200; 17.214966ms)
Jan  1 13:55:43.715: INFO: (16) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 17.178016ms)
Jan  1 13:55:43.715: INFO: (16) /api/v1/namespaces/proxy-5878/services/proxy-service-cjfvx:portname1/proxy/: foo (200; 17.263316ms)
Jan  1 13:55:43.715: INFO: (16) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 17.345988ms)
Jan  1 13:55:43.749: INFO: (17) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 33.446628ms)
Jan  1 13:55:43.749: INFO: (17) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 33.329303ms)
Jan  1 13:55:43.749: INFO: (17) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 33.402863ms)
Jan  1 13:55:43.749: INFO: (17) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 33.620413ms)
Jan  1 13:55:43.750: INFO: (17) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 34.513749ms)
Jan  1 13:55:43.750: INFO: (17) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 34.928086ms)
Jan  1 13:55:43.753: INFO: (17) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<... (200; 27.928924ms)
Jan  1 13:55:43.785: INFO: (18) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<... (200; 28.385317ms)
Jan  1 13:55:43.786: INFO: (18) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:460/proxy/: tls baz (200; 28.300271ms)
Jan  1 13:55:43.786: INFO: (18) /api/v1/namespaces/proxy-5878/services/https:proxy-service-cjfvx:tlsportname2/proxy/: tls qux (200; 28.6762ms)
Jan  1 13:55:43.786: INFO: (18) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 28.773262ms)
Jan  1 13:55:43.786: INFO: (18) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 28.471348ms)
Jan  1 13:55:43.786: INFO: (18) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 28.688749ms)
Jan  1 13:55:43.786: INFO: (18) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 28.958629ms)
Jan  1 13:55:43.786: INFO: (18) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 28.777334ms)
Jan  1 13:55:43.787: INFO: (18) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname1/proxy/: foo (200; 29.610189ms)
Jan  1 13:55:43.789: INFO: (18) /api/v1/namespaces/proxy-5878/services/http:proxy-service-cjfvx:portname2/proxy/: bar (200; 32.105443ms)
Jan  1 13:55:43.806: INFO: (19) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 15.258957ms)
Jan  1 13:55:43.808: INFO: (19) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd/proxy/: test (200; 18.060232ms)
Jan  1 13:55:43.808: INFO: (19) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:1080/proxy/: test<... (200; 18.147428ms)
Jan  1 13:55:43.809: INFO: (19) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 18.067882ms)
Jan  1 13:55:43.809: INFO: (19) /api/v1/namespaces/proxy-5878/pods/proxy-service-cjfvx-75ckd:160/proxy/: foo (200; 18.325137ms)
Jan  1 13:55:43.809: INFO: (19) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:162/proxy/: bar (200; 18.588125ms)
Jan  1 13:55:43.809: INFO: (19) /api/v1/namespaces/proxy-5878/pods/http:proxy-service-cjfvx-75ckd:1080/proxy/: ... (200; 18.836191ms)
Jan  1 13:55:43.809: INFO: (19) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:462/proxy/: tls qux (200; 19.353009ms)
Jan  1 13:55:43.809: INFO: (19) /api/v1/namespaces/proxy-5878/pods/https:proxy-service-cjfvx-75ckd:443/proxy/: test<...
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan  1 13:55:57.455: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:55:57.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5117" for this suite.
Jan  1 13:56:03.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:56:03.873: INFO: namespace kubectl-5117 deletion completed in 6.228293312s

• [SLOW TEST:6.546 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
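
For reference, a hand-run sketch of what this spec exercises, assuming kubectl on PATH, a reachable cluster, and an arbitrary scratch file /tmp/proxy.out:

kubectl proxy -p 0 --disable-filter=true >/tmp/proxy.out 2>&1 &
sleep 1
# kubectl proxy prints "Starting to serve on 127.0.0.1:<port>"; recover the kernel-assigned port
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/proxy.out)
curl -s "http://127.0.0.1:${PORT}/api/"
kill $!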
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:56:03.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:56:04.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5360" for this suite.
Jan  1 13:56:10.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:56:10.313: INFO: namespace kubelet-test-5360 deletion completed in 6.19283401s

• [SLOW TEST:6.438 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
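
A sketch of the same scenario with hypothetical names; the point of the spec is that a pod whose command always fails must still be deletable:

kubectl run bb-fail --image=busybox --restart=Never -- /bin/sh -c 'exit 1'
kubectl get pod bb-fail                  # settles in Error; the command never succeeds
kubectl delete pod bb-fail --wait=true   # deletion must work regardless of container state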
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:56:10.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 13:56:10.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe" in namespace "projected-5264" to be "success or failure"
Jan  1 13:56:10.533: INFO: Pod "downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe": Phase="Pending", Reason="", readiness=false. Elapsed: 12.187836ms
Jan  1 13:56:12.548: INFO: Pod "downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027275938s
Jan  1 13:56:14.561: INFO: Pod "downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039780791s
Jan  1 13:56:16.614: INFO: Pod "downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092593337s
Jan  1 13:56:18.631: INFO: Pod "downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109886517s
STEP: Saw pod success
Jan  1 13:56:18.631: INFO: Pod "downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe" satisfied condition "success or failure"
Jan  1 13:56:18.638: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe container client-container: 
STEP: delete the pod
Jan  1 13:56:18.781: INFO: Waiting for pod downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe to disappear
Jan  1 13:56:18.790: INFO: Pod downwardapi-volume-abbdb098-62c1-4d58-8d0e-72de567637fe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:56:18.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5264" for this suite.
Jan  1 13:56:24.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:56:24.991: INFO: namespace projected-5264 deletion completed in 6.190521386s

• [SLOW TEST:14.678 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
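
The volume plugin under test can be exercised with a pod like this sketch (hypothetical names; busybox stands in for the harness's mounttest image). The projected downwardAPI source surfaces the container's CPU request as a file:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]   # prints 250 (millicores, per the 1m divisor)
    resources:
      requests:
        cpu: "250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF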
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:56:24.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-034e9c63-e1ae-4d9c-a616-488675dc334c
STEP: Creating a pod to test consume configMaps
Jan  1 13:56:25.298: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414" in namespace "projected-2271" to be "success or failure"
Jan  1 13:56:25.364: INFO: Pod "pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414": Phase="Pending", Reason="", readiness=false. Elapsed: 65.633479ms
Jan  1 13:56:27.376: INFO: Pod "pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076935006s
Jan  1 13:56:29.403: INFO: Pod "pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104174376s
Jan  1 13:56:31.436: INFO: Pod "pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137366118s
Jan  1 13:56:33.466: INFO: Pod "pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.167643434s
STEP: Saw pod success
Jan  1 13:56:33.466: INFO: Pod "pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414" satisfied condition "success or failure"
Jan  1 13:56:33.470: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 13:56:33.557: INFO: Waiting for pod pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414 to disappear
Jan  1 13:56:33.631: INFO: Pod pod-projected-configmaps-fad84437-1c65-4cdc-bc46-84867dde2414 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:56:33.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2271" for this suite.
Jan  1 13:56:39.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:56:39.770: INFO: namespace projected-2271 deletion completed in 6.126301986s

• [SLOW TEST:14.777 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
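
A sketch of the double mount with hypothetical names; one ConfigMap backs two projected volumes, and both paths must serve the same content:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]   # value-1, twice
    volumeMounts:
    - name: cm-1
      mountPath: /etc/cm-1
    - name: cm-2
      mountPath: /etc/cm-2
  volumes:
  - name: cm-1
    projected:
      sources:
      - configMap:
          name: demo-cm
  - name: cm-2
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF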
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:56:39.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1880
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1880
STEP: Creating statefulset with conflicting port in namespace statefulset-1880
STEP: Waiting until pod test-pod starts running in namespace statefulset-1880
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1880
Jan  1 13:56:50.058: INFO: Observed stateful pod in namespace: statefulset-1880, name: ss-0, uid: cfeb6577-1e53-42fc-8d23-0e73935d9dc4, status phase: Pending. Waiting for statefulset controller to delete.
Jan  1 13:56:56.528: INFO: Observed stateful pod in namespace: statefulset-1880, name: ss-0, uid: cfeb6577-1e53-42fc-8d23-0e73935d9dc4, status phase: Failed. Waiting for statefulset controller to delete.
Jan  1 13:56:56.561: INFO: Observed stateful pod in namespace: statefulset-1880, name: ss-0, uid: cfeb6577-1e53-42fc-8d23-0e73935d9dc4, status phase: Failed. Waiting for statefulset controller to delete.
Jan  1 13:56:56.576: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1880
STEP: Removing pod with conflicting port in namespace statefulset-1880
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1880 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  1 13:57:04.760: INFO: Deleting all statefulset in ns statefulset-1880
Jan  1 13:57:04.765: INFO: Scaling statefulset ss to 0
Jan  1 13:57:24.836: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 13:57:24.841: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:57:24.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1880" for this suite.
Jan  1 13:57:32.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:57:33.034: INFO: namespace statefulset-1880 deletion completed in 8.164099046s

• [SLOW TEST:53.264 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
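
The mechanics, using the names from this run: a bare pod holds a hostPort, the one-replica StatefulSet wants the same port on the same node, so ss-0 flaps between Pending and Failed and is recreated by the controller until the conflicting pod goes away:

kubectl get pod ss-0 -n statefulset-1880 -o jsonpath='{.status.phase}'   # Pending or Failed while the conflict persists
kubectl delete pod test-pod -n statefulset-1880                          # remove the conflicting pod
kubectl get pod ss-0 -n statefulset-1880 -w                              # ss-0 is recreated and reaches Running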
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:57:33.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  1 13:57:33.141: INFO: Waiting up to 5m0s for pod "downward-api-9d35c89a-2770-49a1-952a-8c83f92da425" in namespace "downward-api-1192" to be "success or failure"
Jan  1 13:57:33.183: INFO: Pod "downward-api-9d35c89a-2770-49a1-952a-8c83f92da425": Phase="Pending", Reason="", readiness=false. Elapsed: 41.776735ms
Jan  1 13:57:35.195: INFO: Pod "downward-api-9d35c89a-2770-49a1-952a-8c83f92da425": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053860096s
Jan  1 13:57:37.200: INFO: Pod "downward-api-9d35c89a-2770-49a1-952a-8c83f92da425": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059298423s
Jan  1 13:57:39.208: INFO: Pod "downward-api-9d35c89a-2770-49a1-952a-8c83f92da425": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066913147s
Jan  1 13:57:41.222: INFO: Pod "downward-api-9d35c89a-2770-49a1-952a-8c83f92da425": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080732228s
STEP: Saw pod success
Jan  1 13:57:41.222: INFO: Pod "downward-api-9d35c89a-2770-49a1-952a-8c83f92da425" satisfied condition "success or failure"
Jan  1 13:57:41.230: INFO: Trying to get logs from node iruya-node pod downward-api-9d35c89a-2770-49a1-952a-8c83f92da425 container dapi-container: 
STEP: delete the pod
Jan  1 13:57:41.327: INFO: Waiting for pod downward-api-9d35c89a-2770-49a1-952a-8c83f92da425 to disappear
Jan  1 13:57:41.370: INFO: Pod downward-api-9d35c89a-2770-49a1-952a-8c83f92da425 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:57:41.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1192" for this suite.
Jan  1 13:57:47.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:57:47.616: INFO: namespace downward-api-1192 deletion completed in 6.236348755s

• [SLOW TEST:14.578 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
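
Roughly the pod under test (hypothetical name; busybox in place of the harness image). With no resources.limits set, the downward API resolves limits.cpu and limits.memory to the node's allocatable capacity, which is exactly what the spec asserts:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF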
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:57:47.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 13:57:47.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4020'
Jan  1 13:57:49.499: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 13:57:49.499: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  1 13:57:49.646: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-wbbnk]
Jan  1 13:57:49.646: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-wbbnk" in namespace "kubectl-4020" to be "running and ready"
Jan  1 13:57:49.676: INFO: Pod "e2e-test-nginx-rc-wbbnk": Phase="Pending", Reason="", readiness=false. Elapsed: 30.324906ms
Jan  1 13:57:51.690: INFO: Pod "e2e-test-nginx-rc-wbbnk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043842919s
Jan  1 13:57:53.699: INFO: Pod "e2e-test-nginx-rc-wbbnk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05331919s
Jan  1 13:57:55.707: INFO: Pod "e2e-test-nginx-rc-wbbnk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06068616s
Jan  1 13:57:57.715: INFO: Pod "e2e-test-nginx-rc-wbbnk": Phase="Running", Reason="", readiness=true. Elapsed: 8.069606021s
Jan  1 13:57:57.716: INFO: Pod "e2e-test-nginx-rc-wbbnk" satisfied condition "running and ready"
Jan  1 13:57:57.716: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-wbbnk]
Jan  1 13:57:57.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-4020'
Jan  1 13:57:57.915: INFO: stderr: ""
Jan  1 13:57:57.916: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan  1 13:57:57.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4020'
Jan  1 13:57:58.053: INFO: stderr: ""
Jan  1 13:57:58.053: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:57:58.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4020" for this suite.
Jan  1 13:58:04.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:58:04.192: INFO: namespace kubectl-4020 deletion completed in 6.12933718s

• [SLOW TEST:16.576 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
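
The deprecated generator used here, sketched with a hypothetical name; run/v1 emits a bare ReplicationController rather than a Deployment and was removed from kubectl in later releases:

kubectl run my-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl get rc my-rc
kubectl logs rc/my-rc   # an idle nginx:1.14 writes nothing to stdout, hence the empty logs above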
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:58:04.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 13:58:04.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1329'
Jan  1 13:58:04.495: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 13:58:04.496: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Jan  1 13:58:04.569: INFO: scanned /root for discovery docs: 
Jan  1 13:58:04.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1329'
Jan  1 13:58:27.171: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  1 13:58:27.171: INFO: stdout: "Created e2e-test-nginx-rc-b48eee776b648584adf266610d6da1f7\nScaling up e2e-test-nginx-rc-b48eee776b648584adf266610d6da1f7 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b48eee776b648584adf266610d6da1f7 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b48eee776b648584adf266610d6da1f7 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  1 13:58:27.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1329'
Jan  1 13:58:27.396: INFO: stderr: ""
Jan  1 13:58:27.396: INFO: stdout: "e2e-test-nginx-rc-b48eee776b648584adf266610d6da1f7-b6rp7 "
Jan  1 13:58:27.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b48eee776b648584adf266610d6da1f7-b6rp7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1329'
Jan  1 13:58:27.491: INFO: stderr: ""
Jan  1 13:58:27.491: INFO: stdout: "true"
Jan  1 13:58:27.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b48eee776b648584adf266610d6da1f7-b6rp7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1329'
Jan  1 13:58:27.600: INFO: stderr: ""
Jan  1 13:58:27.600: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  1 13:58:27.600: INFO: e2e-test-nginx-rc-b48eee776b648584adf266610d6da1f7-b6rp7 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan  1 13:58:27.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1329'
Jan  1 13:58:27.714: INFO: stderr: ""
Jan  1 13:58:27.714: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:58:27.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1329" for this suite.
Jan  1 13:58:33.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:58:34.000: INFO: namespace kubectl-1329 deletion completed in 6.279604151s

• [SLOW TEST:29.808 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
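
The same operation by hand, with a hypothetical controller name; rolling-update only works on ReplicationControllers and was removed in later kubectl releases, where a Deployment plus rollout is the replacement:

kubectl rolling-update my-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# on current clusters, roughly:
kubectl rollout restart deployment/my-deploy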
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:58:34.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-dda099d9-36b6-49ca-b335-423f86e17050
STEP: Creating a pod to test consume configMaps
Jan  1 13:58:34.134: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483" in namespace "projected-2013" to be "success or failure"
Jan  1 13:58:34.143: INFO: Pod "pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483": Phase="Pending", Reason="", readiness=false. Elapsed: 9.204423ms
Jan  1 13:58:36.155: INFO: Pod "pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021203148s
Jan  1 13:58:38.161: INFO: Pod "pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027210047s
Jan  1 13:58:40.182: INFO: Pod "pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047861873s
Jan  1 13:58:42.186: INFO: Pod "pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051606215s
STEP: Saw pod success
Jan  1 13:58:42.186: INFO: Pod "pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483" satisfied condition "success or failure"
Jan  1 13:58:42.188: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 13:58:42.542: INFO: Waiting for pod pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483 to disappear
Jan  1 13:58:42.573: INFO: Pod pod-projected-configmaps-d91d95ee-2b49-4204-905a-d557866c3483 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:58:42.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2013" for this suite.
Jan  1 13:58:48.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:58:48.880: INFO: namespace projected-2013 deletion completed in 6.298380469s

• [SLOW TEST:14.880 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
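
A sketch of a mapped item with an explicit mode (hypothetical names): key data-1 is remapped to path/to/data-2 and the file mode is pinned to 0400, which the spec then reads back:

kubectl create configmap demo-map-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mapped-mode
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cm/path/to/data-2 && cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-map-cm
          items:
          - key: data-1
            path: path/to/data-2
            mode: 0400
EOF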
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:58:48.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-2da7f49c-b6d5-41ca-a757-2c35f87a9c1b
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-2da7f49c-b6d5-41ca-a757-2c35f87a9c1b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:59:01.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2116" for this suite.
Jan  1 13:59:23.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:59:23.455: INFO: namespace projected-2116 deletion completed in 22.154217883s

• [SLOW TEST:34.575 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
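
The update path reproduced with plain kubectl (hypothetical ConfigMap name; mount it in a running pod as in the sketches above, then rewrite it in place):

kubectl create configmap live-cm --from-literal=data-1=value-1
kubectl create configmap live-cm --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -   # newer kubectl spells this --dry-run=client
# within the kubelet's sync period the file backing the volume flips from value-1 to value-2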
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:59:23.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jan  1 13:59:23.559: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  1 13:59:28.591: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 13:59:29.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5426" for this suite.
Jan  1 13:59:35.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 13:59:36.077: INFO: namespace replication-controller-5426 deletion completed in 6.385177004s

• [SLOW TEST:12.620 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
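
The release step amounts to a label flip (the pod name suffix here is hypothetical): once a pod stops matching the RC's selector, the controller drops its ownerReference and creates a replacement:

kubectl label pod pod-release-xxxxx name=not-matching --overwrite
kubectl get pod pod-release-xxxxx -o jsonpath='{.metadata.ownerReferences}'   # now empty: the pod is released
kubectl get pods -l name=pod-release                                          # the RC backfills a fresh replica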
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 13:59:36.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan  1 13:59:36.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4023'
Jan  1 13:59:36.587: INFO: stderr: ""
Jan  1 13:59:36.588: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 13:59:36.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4023'
Jan  1 13:59:36.747: INFO: stderr: ""
Jan  1 13:59:36.748: INFO: stdout: "update-demo-nautilus-jvn6x update-demo-nautilus-tjgtj "
Jan  1 13:59:36.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvn6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 13:59:37.084: INFO: stderr: ""
Jan  1 13:59:37.084: INFO: stdout: ""
Jan  1 13:59:37.084: INFO: update-demo-nautilus-jvn6x is created but not running
Jan  1 13:59:42.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4023'
Jan  1 13:59:42.402: INFO: stderr: ""
Jan  1 13:59:42.403: INFO: stdout: "update-demo-nautilus-jvn6x update-demo-nautilus-tjgtj "
Jan  1 13:59:42.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvn6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 13:59:42.987: INFO: stderr: ""
Jan  1 13:59:42.988: INFO: stdout: ""
Jan  1 13:59:42.988: INFO: update-demo-nautilus-jvn6x is created but not running
Jan  1 13:59:47.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4023'
Jan  1 13:59:48.171: INFO: stderr: ""
Jan  1 13:59:48.171: INFO: stdout: "update-demo-nautilus-jvn6x update-demo-nautilus-tjgtj "
Jan  1 13:59:48.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvn6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 13:59:48.260: INFO: stderr: ""
Jan  1 13:59:48.260: INFO: stdout: "true"
Jan  1 13:59:48.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jvn6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 13:59:48.407: INFO: stderr: ""
Jan  1 13:59:48.407: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 13:59:48.407: INFO: validating pod update-demo-nautilus-jvn6x
Jan  1 13:59:48.435: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 13:59:48.436: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  1 13:59:48.436: INFO: update-demo-nautilus-jvn6x is verified up and running
Jan  1 13:59:48.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjgtj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 13:59:48.594: INFO: stderr: ""
Jan  1 13:59:48.594: INFO: stdout: "true"
Jan  1 13:59:48.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjgtj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 13:59:48.704: INFO: stderr: ""
Jan  1 13:59:48.704: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 13:59:48.704: INFO: validating pod update-demo-nautilus-tjgtj
Jan  1 13:59:48.727: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 13:59:48.727: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  1 13:59:48.727: INFO: update-demo-nautilus-tjgtj is verified up and running
STEP: rolling-update to new replication controller
Jan  1 13:59:48.734: INFO: scanned /root for discovery docs: 
Jan  1 13:59:48.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4023'
Jan  1 14:00:21.190: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  1 14:00:21.190: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 14:00:21.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4023'
Jan  1 14:00:21.407: INFO: stderr: ""
Jan  1 14:00:21.408: INFO: stdout: "update-demo-kitten-qmpln update-demo-kitten-t8v84 "
Jan  1 14:00:21.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qmpln -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 14:00:21.517: INFO: stderr: ""
Jan  1 14:00:21.517: INFO: stdout: "true"
Jan  1 14:00:21.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qmpln -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 14:00:21.658: INFO: stderr: ""
Jan  1 14:00:21.659: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  1 14:00:21.659: INFO: validating pod update-demo-kitten-qmpln
Jan  1 14:00:21.688: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  1 14:00:21.688: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  1 14:00:21.688: INFO: update-demo-kitten-qmpln is verified up and running
Jan  1 14:00:21.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t8v84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 14:00:21.790: INFO: stderr: ""
Jan  1 14:00:21.790: INFO: stdout: "true"
Jan  1 14:00:21.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t8v84 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4023'
Jan  1 14:00:21.964: INFO: stderr: ""
Jan  1 14:00:21.964: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  1 14:00:21.965: INFO: validating pod update-demo-kitten-t8v84
Jan  1 14:00:22.001: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  1 14:00:22.001: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  1 14:00:22.001: INFO: update-demo-kitten-t8v84 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:00:22.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4023" for this suite.
Jan  1 14:00:46.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:00:46.273: INFO: namespace kubectl-4023 deletion completed in 24.262212363s

• [SLOW TEST:70.195 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
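
The verification loop above is plain kubectl template output (exists is a helper kubectl registers for its template printer); the same checks can be run by hand against any pod named in the log:

kubectl get pods -l name=update-demo -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl get pod update-demo-kitten-qmpln -o template --template='{{if (exists . "spec" "containers")}}{{range .spec.containers}}{{.image}} {{end}}{{end}}'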
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:00:46.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6473
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  1 14:00:46.425: INFO: Found 0 stateful pods, waiting for 3
Jan  1 14:00:56.449: INFO: Found 2 stateful pods, waiting for 3
Jan  1 14:01:06.438: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:01:06.438: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:01:06.438: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  1 14:01:16.443: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:01:16.443: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:01:16.443: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  1 14:01:16.509: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  1 14:01:26.594: INFO: Updating stateful set ss2
Jan  1 14:01:26.649: INFO: Waiting for Pod statefulset-6473/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  1 14:01:36.996: INFO: Found 2 stateful pods, waiting for 3
Jan  1 14:01:47.005: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:01:47.005: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:01:47.005: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  1 14:01:57.005: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:01:57.005: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:01:57.005: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  1 14:01:57.030: INFO: Updating stateful set ss2
Jan  1 14:01:57.161: INFO: Waiting for Pod statefulset-6473/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:02:07.780: INFO: Updating stateful set ss2
Jan  1 14:02:07.851: INFO: Waiting for StatefulSet statefulset-6473/ss2 to complete update
Jan  1 14:02:07.851: INFO: Waiting for Pod statefulset-6473/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:02:17.873: INFO: Waiting for StatefulSet statefulset-6473/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  1 14:02:27.907: INFO: Deleting all statefulset in ns statefulset-6473
Jan  1 14:02:27.913: INFO: Scaling statefulset ss2 to 0
Jan  1 14:02:47.947: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 14:02:47.953: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:02:47.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6473" for this suite.
Jan  1 14:02:56.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:02:56.211: INFO: namespace statefulset-6473 deletion completed in 8.217910253s

• [SLOW TEST:129.937 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
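
Both the canary and the phased rollout are driven by the RollingUpdate partition; a sketch with this run's names (the container name nginx is an assumption):

kubectl patch statefulset ss2 -n statefulset-6473 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine -n statefulset-6473   # only ordinals >= 2 update: the canary
kubectl patch statefulset ss2 -n statefulset-6473 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'   # lower stepwise to phase the rollout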
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:02:56.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  1 14:02:56.385: INFO: Waiting up to 5m0s for pod "pod-eeceda20-4e7f-43c1-8559-6083541ff39a" in namespace "emptydir-2149" to be "success or failure"
Jan  1 14:02:56.410: INFO: Pod "pod-eeceda20-4e7f-43c1-8559-6083541ff39a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.571516ms
Jan  1 14:02:58.423: INFO: Pod "pod-eeceda20-4e7f-43c1-8559-6083541ff39a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036922865s
Jan  1 14:03:00.433: INFO: Pod "pod-eeceda20-4e7f-43c1-8559-6083541ff39a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047422816s
Jan  1 14:03:02.441: INFO: Pod "pod-eeceda20-4e7f-43c1-8559-6083541ff39a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055263622s
Jan  1 14:03:04.457: INFO: Pod "pod-eeceda20-4e7f-43c1-8559-6083541ff39a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071075702s
STEP: Saw pod success
Jan  1 14:03:04.457: INFO: Pod "pod-eeceda20-4e7f-43c1-8559-6083541ff39a" satisfied condition "success or failure"
Jan  1 14:03:04.466: INFO: Trying to get logs from node iruya-node pod pod-eeceda20-4e7f-43c1-8559-6083541ff39a container test-container: 
STEP: delete the pod
Jan  1 14:03:04.549: INFO: Waiting for pod pod-eeceda20-4e7f-43c1-8559-6083541ff39a to disappear
Jan  1 14:03:04.555: INFO: Pod pod-eeceda20-4e7f-43c1-8559-6083541ff39a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:03:04.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2149" for this suite.
Jan  1 14:03:10.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:03:10.849: INFO: namespace emptydir-2149 deletion completed in 6.285709187s

• [SLOW TEST:14.636 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
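For orientation, the pod this spec creates can be sketched as the manifest below. The names, image, and command are illustrative stand-ins (the suite uses its own test image); the load-bearing detail is emptyDir.medium: Memory, which backs the volume with tmpfs so its mount mode can be checked:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # stand-in for the suite's test image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the mount's mode bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir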
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:03:10.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 14:03:10.995: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  1 14:03:16.022: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  1 14:03:20.072: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  1 14:03:20.114: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3837,SelfLink:/apis/apps/v1/namespaces/deployment-3837/deployments/test-cleanup-deployment,UID:ab3603f9-6bfa-4b19-b25a-f1a4504a7aa4,ResourceVersion:18901992,Generation:1,CreationTimestamp:2020-01-01 14:03:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  1 14:03:20.122: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan  1 14:03:20.122: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan  1 14:03:20.123: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3837,SelfLink:/apis/apps/v1/namespaces/deployment-3837/replicasets/test-cleanup-controller,UID:90c5132c-92aa-4cce-a35a-7266b973a799,ResourceVersion:18901993,Generation:1,CreationTimestamp:2020-01-01 14:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment ab3603f9-6bfa-4b19-b25a-f1a4504a7aa4 0xc002c664e7 0xc002c664e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  1 14:03:20.202: INFO: Pod "test-cleanup-controller-qprnw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-qprnw,GenerateName:test-cleanup-controller-,Namespace:deployment-3837,SelfLink:/api/v1/namespaces/deployment-3837/pods/test-cleanup-controller-qprnw,UID:7197a139-b530-4ecf-a7f0-e1d71e02ee62,ResourceVersion:18901989,Generation:0,CreationTimestamp:2020-01-01 14:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 90c5132c-92aa-4cce-a35a-7266b973a799 0xc002bb0157 0xc002bb0158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hklpq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hklpq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hklpq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb0340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb0360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:03:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:03:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:03:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:03:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-01 14:03:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-01 14:03:17 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d5d721de449688ad0f4ad3392a5265920f486599b8aff9a525507064ce926292}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:03:20.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3837" for this suite.
Jan  1 14:03:28.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:03:28.789: INFO: namespace deployment-3837 deletion completed in 8.524002846s

• [SLOW TEST:17.940 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
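The Deployment dump above shows RevisionHistoryLimit:*0, which is the knob this spec exercises: with no history retained, the superseded test-cleanup-controller ReplicaSet is garbage-collected once the rollout completes, which is the "history to be cleaned up" the spec waits for. A minimal manifest equivalent to the dumped spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0        # keep no old ReplicaSets after a rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0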
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:03:28.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan  1 14:03:28.931: INFO: Waiting up to 5m0s for pod "client-containers-af08f89e-b5db-4dea-8321-c458cdb39def" in namespace "containers-2526" to be "success or failure"
Jan  1 14:03:28.956: INFO: Pod "client-containers-af08f89e-b5db-4dea-8321-c458cdb39def": Phase="Pending", Reason="", readiness=false. Elapsed: 23.959659ms
Jan  1 14:03:30.966: INFO: Pod "client-containers-af08f89e-b5db-4dea-8321-c458cdb39def": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034491231s
Jan  1 14:03:32.982: INFO: Pod "client-containers-af08f89e-b5db-4dea-8321-c458cdb39def": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05025372s
Jan  1 14:03:34.990: INFO: Pod "client-containers-af08f89e-b5db-4dea-8321-c458cdb39def": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058245507s
Jan  1 14:03:37.009: INFO: Pod "client-containers-af08f89e-b5db-4dea-8321-c458cdb39def": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077754513s
STEP: Saw pod success
Jan  1 14:03:37.010: INFO: Pod "client-containers-af08f89e-b5db-4dea-8321-c458cdb39def" satisfied condition "success or failure"
Jan  1 14:03:37.017: INFO: Trying to get logs from node iruya-node pod client-containers-af08f89e-b5db-4dea-8321-c458cdb39def container test-container: 
STEP: delete the pod
Jan  1 14:03:37.077: INFO: Waiting for pod client-containers-af08f89e-b5db-4dea-8321-c458cdb39def to disappear
Jan  1 14:03:37.091: INFO: Pod client-containers-af08f89e-b5db-4dea-8321-c458cdb39def no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:03:37.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2526" for this suite.
Jan  1 14:03:43.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:03:43.338: INFO: namespace containers-2526 deletion completed in 6.239232817s

• [SLOW TEST:14.548 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
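The "override arguments" pod works by setting args on the container, which replaces the image's default CMD (the "docker cmd" of the title) while leaving any ENTRYPOINT intact. A minimal sketch, with illustrative names and a busybox stand-in image:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # stand-in image
    args: ["echo", "override", "arguments"]   # replaces the image's default CMD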
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:03:43.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-dec6662e-8ede-4d50-bf90-508f548505a7 in namespace container-probe-7352
Jan  1 14:03:51.508: INFO: Started pod test-webserver-dec6662e-8ede-4d50-bf90-508f548505a7 in namespace container-probe-7352
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 14:03:51.518: INFO: Initial restart count of pod test-webserver-dec6662e-8ede-4d50-bf90-508f548505a7 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:07:53.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7352" for this suite.
Jan  1 14:07:59.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:07:59.725: INFO: namespace container-probe-7352 deletion completed in 6.211499626s

• [SLOW TEST:256.386 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
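This spec gives a pod an HTTP liveness probe against /healthz and then watches restartCount stay at 0 for roughly four minutes (14:03:51 to 14:07:53 above). A sketch of such a pod, using busybox's httpd so /healthz actually answers 200; the image, port, and probe timings are illustrative, not the suite's fixture:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo
spec:
  containers:
  - name: test-webserver
    image: busybox                      # stand-in; serves a static /healthz
    command: ["sh", "-c", "mkdir -p /www && echo ok > /www/healthz && httpd -f -p 8080 -h /www"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15           # illustrative values
      periodSeconds: 5
      failureThreshold: 3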
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:07:59.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 14:08:27.886: INFO: Container started at 2020-01-01 14:08:06 +0000 UTC, pod became ready at 2020-01-01 14:08:26 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:08:27.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8338" for this suite.
Jan  1 14:08:49.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:08:50.106: INFO: namespace container-probe-8338 deletion completed in 22.183460629s

• [SLOW TEST:50.380 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
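Here the probe under test is a readiness probe with an initial delay: the log confirms the container started at 14:08:06 but the pod only became Ready about 20 seconds later, with no restarts. A sketch with an initialDelaySeconds matching that gap (image and values are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: test-webserver
    image: busybox                      # stand-in image
    command: ["sh", "-c", "mkdir -p /www && echo ok > /www/index.html && httpd -f -p 8080 -h /www"]
    readinessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 20           # the pod cannot become Ready before this delay
      periodSeconds: 5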
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:08:50.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bd562919-12a9-4c5f-a945-12f127e229fe
STEP: Creating a pod to test consume configMaps
Jan  1 14:08:50.290: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288" in namespace "projected-1096" to be "success or failure"
Jan  1 14:08:50.338: INFO: Pod "pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288": Phase="Pending", Reason="", readiness=false. Elapsed: 48.393232ms
Jan  1 14:08:52.363: INFO: Pod "pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073173535s
Jan  1 14:08:54.375: INFO: Pod "pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085437466s
Jan  1 14:08:56.383: INFO: Pod "pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092687927s
Jan  1 14:08:58.390: INFO: Pod "pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100203007s
STEP: Saw pod success
Jan  1 14:08:58.390: INFO: Pod "pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288" satisfied condition "success or failure"
Jan  1 14:08:58.396: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 14:08:58.478: INFO: Waiting for pod pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288 to disappear
Jan  1 14:08:58.484: INFO: Pod pod-projected-configmaps-1cc134ac-1d79-432d-834c-05f234c2e288 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:08:58.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1096" for this suite.
Jan  1 14:09:04.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:09:04.649: INFO: namespace projected-1096 deletion completed in 6.157131086s

• [SLOW TEST:14.542 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
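The projected volume's defaultMode is what this spec checks: every file projected into the volume gets those permission bits unless an item overrides them. A sketch, assuming a ConfigMap named projected-configmap-demo already exists in the namespace (all names, the image, and the mode are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                      # stand-in image
    command: ["sh", "-c", "ls -l /etc/projected"]   # file modes should reflect defaultMode
    volumeMounts:
    - name: projected-config
      mountPath: /etc/projected
  volumes:
  - name: projected-config
    projected:
      defaultMode: 0400                 # illustrative mode under test
      sources:
      - configMap:
          name: projected-configmap-demo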
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:09:04.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  1 14:09:04.760: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:09:18.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7359" for this suite.
Jan  1 14:09:24.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:09:24.467: INFO: namespace init-container-7359 deletion completed in 6.179723533s

• [SLOW TEST:19.817 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
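On a restartPolicy: Never pod, init containers run once each, in order, and must all succeed before the app container starts; that ordering is what the spec asserts. A minimal sketch (names and image illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox                      # stand-in image
    command: ["sh", "-c", "echo first init done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init done"]
  containers:
  - name: run-1
    image: busybox
    command: ["sh", "-c", "echo main container ran"]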
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:09:24.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 14:09:24.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  1 14:09:24.688: INFO: stderr: ""
Jan  1 14:09:24.688: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:09:24.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6390" for this suite.
Jan  1 14:09:30.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:09:30.886: INFO: namespace kubectl-6390 deletion completed in 6.18716108s

• [SLOW TEST:6.414 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:09:30.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:09:31.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-219" for this suite.
Jan  1 14:09:53.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:09:53.184: INFO: namespace pods-219 deletion completed in 22.170027164s

• [SLOW TEST:22.297 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
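The QoS class verified here is computed by the API server from the pod's resource spec: requests equal to limits for every container yields Guaranteed, requests below limits (or set on only some containers) yields Burstable, and no requests or limits at all yields BestEffort. A Guaranteed-class sketch (name, image, and quantities illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-qos-demo
spec:
  containers:
  - name: main
    image: busybox                      # stand-in image
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m                       # requests == limits => Guaranteed
        memory: 100Mi

kubectl get pod pod-qos-demo -o jsonpath='{.status.qosClass}' would then print Guaranteed.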
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:09:53.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  1 14:09:53.286: INFO: Waiting up to 5m0s for pod "pod-87f12ee3-a988-457f-8b0f-bcaf19243e33" in namespace "emptydir-7836" to be "success or failure"
Jan  1 14:09:53.304: INFO: Pod "pod-87f12ee3-a988-457f-8b0f-bcaf19243e33": Phase="Pending", Reason="", readiness=false. Elapsed: 17.601719ms
Jan  1 14:09:55.325: INFO: Pod "pod-87f12ee3-a988-457f-8b0f-bcaf19243e33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038664069s
Jan  1 14:09:57.337: INFO: Pod "pod-87f12ee3-a988-457f-8b0f-bcaf19243e33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050887398s
Jan  1 14:09:59.354: INFO: Pod "pod-87f12ee3-a988-457f-8b0f-bcaf19243e33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067701112s
Jan  1 14:10:01.366: INFO: Pod "pod-87f12ee3-a988-457f-8b0f-bcaf19243e33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080033779s
STEP: Saw pod success
Jan  1 14:10:01.366: INFO: Pod "pod-87f12ee3-a988-457f-8b0f-bcaf19243e33" satisfied condition "success or failure"
Jan  1 14:10:01.372: INFO: Trying to get logs from node iruya-node pod pod-87f12ee3-a988-457f-8b0f-bcaf19243e33 container test-container: 
STEP: delete the pod
Jan  1 14:10:01.452: INFO: Waiting for pod pod-87f12ee3-a988-457f-8b0f-bcaf19243e33 to disappear
Jan  1 14:10:01.457: INFO: Pod pod-87f12ee3-a988-457f-8b0f-bcaf19243e33 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:10:01.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7836" for this suite.
Jan  1 14:10:07.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:10:07.652: INFO: namespace emptydir-7836 deletion completed in 6.187422022s

• [SLOW TEST:14.467 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
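The (non-root,0777,tmpfs) parameters read as: run as a non-root UID, create a file with 0777 permissions, on a memory-backed emptyDir. A sketch of that combination; the UID, names, and image are illustrative, not the suite's fixture:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                     # any non-root UID
  containers:
  - name: test-container
    image: busybox                      # stand-in image
    command: ["sh", "-c", "echo hi > /test/f && chmod 0777 /test/f && stat -c '%a' /test/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                    # tmpfs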
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:10:07.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-835b2863-a1b3-45fb-b5fa-95229007808b
STEP: Creating a pod to test consume configMaps
Jan  1 14:10:07.851: INFO: Waiting up to 5m0s for pod "pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6" in namespace "configmap-8869" to be "success or failure"
Jan  1 14:10:07.879: INFO: Pod "pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 26.872721ms
Jan  1 14:10:09.896: INFO: Pod "pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044738901s
Jan  1 14:10:11.916: INFO: Pod "pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063844646s
Jan  1 14:10:13.937: INFO: Pod "pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085535271s
Jan  1 14:10:15.948: INFO: Pod "pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095904596s
STEP: Saw pod success
Jan  1 14:10:15.948: INFO: Pod "pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6" satisfied condition "success or failure"
Jan  1 14:10:15.955: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6 container configmap-volume-test: 
STEP: delete the pod
Jan  1 14:10:16.014: INFO: Waiting for pod pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6 to disappear
Jan  1 14:10:16.021: INFO: Pod pod-configmaps-d37427c0-6688-4e9a-becf-262840d88fa6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:10:16.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8869" for this suite.
Jan  1 14:10:22.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:10:22.189: INFO: namespace configmap-8869 deletion completed in 6.160559948s

• [SLOW TEST:14.536 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
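"With mappings" refers to the items list on a configMap volume: instead of projecting every key under its own name, a key is mapped to a chosen relative path. A sketch, assuming a ConfigMap named configmap-demo with a key data-1 (all names and the image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                      # stand-in image
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo
      items:
      - key: data-1
        path: path/to/data-2            # the key->path mapping under test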
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:10:22.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 14:10:22.400: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3" in namespace "downward-api-9381" to be "success or failure"
Jan  1 14:10:22.541: INFO: Pod "downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3": Phase="Pending", Reason="", readiness=false. Elapsed: 140.30448ms
Jan  1 14:10:24.560: INFO: Pod "downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159152549s
Jan  1 14:10:26.579: INFO: Pod "downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178471259s
Jan  1 14:10:28.592: INFO: Pod "downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191235676s
Jan  1 14:10:30.610: INFO: Pod "downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.209709812s
STEP: Saw pod success
Jan  1 14:10:30.611: INFO: Pod "downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3" satisfied condition "success or failure"
Jan  1 14:10:30.619: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3 container client-container: 
STEP: delete the pod
Jan  1 14:10:30.682: INFO: Waiting for pod downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3 to disappear
Jan  1 14:10:30.690: INFO: Pod downwardapi-volume-6e050e2b-6976-47c4-85bb-44eabeb0d5c3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:10:30.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9381" for this suite.
Jan  1 14:10:36.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:10:36.974: INFO: namespace downward-api-9381 deletion completed in 6.279504986s

• [SLOW TEST:14.782 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
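A downwardAPI volume takes the same defaultMode field for its projected files, which is what this spec asserts. A sketch exposing the pod's own name with 0400 permissions (names, image, and mode illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # stand-in image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                 # illustrative mode under test
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name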
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:10:36.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan  1 14:10:37.648: INFO: created pod pod-service-account-defaultsa
Jan  1 14:10:37.648: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  1 14:10:37.683: INFO: created pod pod-service-account-mountsa
Jan  1 14:10:37.683: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  1 14:10:37.729: INFO: created pod pod-service-account-nomountsa
Jan  1 14:10:37.729: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  1 14:10:37.747: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  1 14:10:37.747: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  1 14:10:37.792: INFO: created pod pod-service-account-mountsa-mountspec
Jan  1 14:10:37.793: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  1 14:10:37.876: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  1 14:10:37.876: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  1 14:10:37.897: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  1 14:10:37.897: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  1 14:10:37.936: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  1 14:10:37.936: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  1 14:10:38.008: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  1 14:10:38.009: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:10:38.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8348" for this suite.
Jan  1 14:11:11.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:11:11.455: INFO: namespace svcaccounts-8348 deletion completed in 33.42983034s

• [SLOW TEST:34.482 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
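The nine pods above cover the combinations of the two automount knobs: automountServiceAccountToken can be set on the ServiceAccount, on the pod spec, or both, and the pod-level setting wins when both are present (hence, e.g., pod-service-account-mountsa-nomountspec showing "false"). A sketch of the opt-out case (names and image illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false     # opt out at the ServiceAccount level
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level setting overrides the SA's
  containers:
  - name: main
    image: busybox                      # stand-in image
    command: ["sh", "-c", "sleep 3600"]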
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:11:11.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  1 14:11:13.015: INFO: Pod name wrapped-volume-race-c03229dd-ae41-4969-9b22-a9e9fd00696e: Found 0 pods out of 5
Jan  1 14:11:18.029: INFO: Pod name wrapped-volume-race-c03229dd-ae41-4969-9b22-a9e9fd00696e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c03229dd-ae41-4969-9b22-a9e9fd00696e in namespace emptydir-wrapper-7116, will wait for the garbage collector to delete the pods
Jan  1 14:11:44.135: INFO: Deleting ReplicationController wrapped-volume-race-c03229dd-ae41-4969-9b22-a9e9fd00696e took: 17.211116ms
Jan  1 14:11:44.537: INFO: Terminating ReplicationController wrapped-volume-race-c03229dd-ae41-4969-9b22-a9e9fd00696e pods took: 401.924924ms
STEP: Creating RC which spawns configmap-volume pods
Jan  1 14:12:36.699: INFO: Pod name wrapped-volume-race-112ac1c2-7126-4931-b211-37a7df4fee73: Found 0 pods out of 5
Jan  1 14:12:41.718: INFO: Pod name wrapped-volume-race-112ac1c2-7126-4931-b211-37a7df4fee73: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-112ac1c2-7126-4931-b211-37a7df4fee73 in namespace emptydir-wrapper-7116, will wait for the garbage collector to delete the pods
Jan  1 14:13:05.964: INFO: Deleting ReplicationController wrapped-volume-race-112ac1c2-7126-4931-b211-37a7df4fee73 took: 31.969932ms
Jan  1 14:13:06.366: INFO: Terminating ReplicationController wrapped-volume-race-112ac1c2-7126-4931-b211-37a7df4fee73 pods took: 401.833141ms
STEP: Creating RC which spawns configmap-volume pods
Jan  1 14:13:57.175: INFO: Pod name wrapped-volume-race-e40b0ae5-70a7-43a6-8555-a61df76969bc: Found 0 pods out of 5
Jan  1 14:14:02.195: INFO: Pod name wrapped-volume-race-e40b0ae5-70a7-43a6-8555-a61df76969bc: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e40b0ae5-70a7-43a6-8555-a61df76969bc in namespace emptydir-wrapper-7116, will wait for the garbage collector to delete the pods
Jan  1 14:14:30.359: INFO: Deleting ReplicationController wrapped-volume-race-e40b0ae5-70a7-43a6-8555-a61df76969bc took: 20.99369ms
Jan  1 14:14:30.760: INFO: Terminating ReplicationController wrapped-volume-race-e40b0ae5-70a7-43a6-8555-a61df76969bc pods took: 401.126697ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:15:18.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7116" for this suite.
Jan  1 14:15:28.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:15:28.668: INFO: namespace emptydir-wrapper-7116 deletion completed in 10.174355668s

• [SLOW TEST:257.212 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:15:28.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  1 14:15:28.827: INFO: Number of nodes with available pods: 0
Jan  1 14:15:28.827: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:29.850: INFO: Number of nodes with available pods: 0
Jan  1 14:15:29.850: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:31.551: INFO: Number of nodes with available pods: 0
Jan  1 14:15:31.551: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:31.886: INFO: Number of nodes with available pods: 0
Jan  1 14:15:31.887: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:32.845: INFO: Number of nodes with available pods: 0
Jan  1 14:15:32.845: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:33.853: INFO: Number of nodes with available pods: 0
Jan  1 14:15:33.853: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:35.237: INFO: Number of nodes with available pods: 0
Jan  1 14:15:35.237: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:35.915: INFO: Number of nodes with available pods: 0
Jan  1 14:15:35.915: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:36.862: INFO: Number of nodes with available pods: 0
Jan  1 14:15:36.862: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:37.841: INFO: Number of nodes with available pods: 1
Jan  1 14:15:37.842: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:38.848: INFO: Number of nodes with available pods: 1
Jan  1 14:15:38.848: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:39.852: INFO: Number of nodes with available pods: 1
Jan  1 14:15:39.852: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:40.843: INFO: Number of nodes with available pods: 1
Jan  1 14:15:40.843: INFO: Node iruya-node is running more than one daemon pod
Jan  1 14:15:41.845: INFO: Number of nodes with available pods: 2
Jan  1 14:15:41.846: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  1 14:15:42.011: INFO: Number of nodes with available pods: 2
Jan  1 14:15:42.011: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3843, will wait for the garbage collector to delete the pods
Jan  1 14:15:43.110: INFO: Deleting DaemonSet.extensions daemon-set took: 11.059675ms
Jan  1 14:15:43.612: INFO: Terminating DaemonSet.extensions daemon-set pods took: 501.654057ms
Jan  1 14:15:56.625: INFO: Number of nodes with available pods: 0
Jan  1 14:15:56.625: INFO: Number of running nodes: 0, number of available pods: 0
Jan  1 14:15:56.630: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3843/daemonsets","resourceVersion":"18904241"},"items":null}

Jan  1 14:15:56.633: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3843/pods","resourceVersion":"18904241"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:15:56.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3843" for this suite.
Jan  1 14:16:02.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:16:02.833: INFO: namespace daemonsets-3843 deletion completed in 6.178106877s

• [SLOW TEST:34.165 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
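A DaemonSet of the shape exercised here: the controller keeps one pod on every eligible node and, as the spec shows, recreates a pod whose phase has been forced to Failed. A minimal sketch (label and image are illustrative stand-ins):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: busybox                  # stand-in image
        command: ["sh", "-c", "sleep 100000"]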
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:16:02.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan  1 14:16:02.962: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  1 14:16:02.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-581'
Jan  1 14:16:05.490: INFO: stderr: ""
Jan  1 14:16:05.490: INFO: stdout: "service/redis-slave created\n"
Jan  1 14:16:05.491: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  1 14:16:05.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-581'
Jan  1 14:16:06.092: INFO: stderr: ""
Jan  1 14:16:06.092: INFO: stdout: "service/redis-master created\n"
Jan  1 14:16:06.094: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  1 14:16:06.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-581'
Jan  1 14:16:06.650: INFO: stderr: ""
Jan  1 14:16:06.651: INFO: stdout: "service/frontend created\n"
Jan  1 14:16:06.652: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  1 14:16:06.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-581'
Jan  1 14:16:07.091: INFO: stderr: ""
Jan  1 14:16:07.091: INFO: stdout: "deployment.apps/frontend created\n"
Jan  1 14:16:07.092: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  1 14:16:07.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-581'
Jan  1 14:16:07.479: INFO: stderr: ""
Jan  1 14:16:07.479: INFO: stdout: "deployment.apps/redis-master created\n"
Jan  1 14:16:07.480: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  1 14:16:07.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-581'
Jan  1 14:16:09.546: INFO: stderr: ""
Jan  1 14:16:09.546: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan  1 14:16:09.546: INFO: Waiting for all frontend pods to be Running.
Jan  1 14:16:29.599: INFO: Waiting for frontend to serve content.
Jan  1 14:16:33.358: INFO: Trying to add a new entry to the guestbook.
Jan  1 14:16:33.477: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  1 14:16:33.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-581'
Jan  1 14:16:33.757: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 14:16:33.757: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 14:16:33.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-581'
Jan  1 14:16:34.014: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 14:16:34.014: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 14:16:34.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-581'
Jan  1 14:16:34.287: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 14:16:34.287: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 14:16:34.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-581'
Jan  1 14:16:34.380: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 14:16:34.381: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 14:16:34.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-581'
Jan  1 14:16:34.502: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 14:16:34.502: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  1 14:16:34.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-581'
Jan  1 14:16:34.651: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 14:16:34.651: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:16:34.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-581" for this suite.
Jan  1 14:17:18.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:17:18.895: INFO: namespace kubectl-581 deletion completed in 44.203727757s

• [SLOW TEST:76.062 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
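
Note: the manifests for the three guestbook Services force-deleted above ("frontend", "redis-master", "redis-slave") are not reproduced in this log. A minimal sketch of the frontend Service that pairs with the frontend Deployment shown earlier, assuming the standard guestbook example's labels and port:

kubectl create -f - --namespace=kubectl-581 <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  selector:            # matches the Deployment's pod template labels
    app: guestbook
    tier: frontend
  ports:
  - port: 80           # assumed from the frontend containerPort above
EOF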
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:17:18.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:17:27.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4796" for this suite.
Jan  1 14:17:33.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:17:33.293: INFO: namespace kubelet-test-4796 deletion completed in 6.18075032s

• [SLOW TEST:14.397 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
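
Note: the pod manifest this test submits is created programmatically and not echoed in the log. A minimal sketch of a pod whose command always fails, and of reading the terminated reason the kubelet records (pod and container names are hypothetical):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]    # exits non-zero every time
EOF
# once the container has run, the terminated state carries a reason (e.g. "Error"):
kubectl get pod bin-false-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'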
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:17:33.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:17:41.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7821" for this suite.
Jan  1 14:18:21.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:18:21.792: INFO: namespace kubelet-test-7821 deletion completed in 40.210593233s

• [SLOW TEST:48.499 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
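
Note: the read-only pod spec is likewise not shown. A sketch of the securityContext setting being exercised, with a write attempt that should fail with "Read-only file system" (names hypothetical):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-busybox
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "touch /newfile"]   # expected to fail
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs readonly-busybox    # should show the read-only filesystem error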
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:18:21.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan  1 14:18:21.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3442'
Jan  1 14:18:22.498: INFO: stderr: ""
Jan  1 14:18:22.498: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan  1 14:18:23.511: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:18:23.511: INFO: Found 0 / 1
Jan  1 14:18:24.516: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:18:24.517: INFO: Found 0 / 1
Jan  1 14:18:25.511: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:18:25.512: INFO: Found 0 / 1
Jan  1 14:18:26.517: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:18:26.518: INFO: Found 0 / 1
Jan  1 14:18:27.512: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:18:27.512: INFO: Found 0 / 1
Jan  1 14:18:28.525: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:18:28.525: INFO: Found 0 / 1
Jan  1 14:18:29.524: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:18:29.524: INFO: Found 0 / 1
Jan  1 14:18:30.514: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:18:30.514: INFO: Found 1 / 1
Jan  1 14:18:30.514: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  1 14:18:30.523: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:18:30.524: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan  1 14:18:30.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9955s redis-master --namespace=kubectl-3442'
Jan  1 14:18:30.708: INFO: stderr: ""
Jan  1 14:18:30.708: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 01 Jan 14:18:29.340 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jan 14:18:29.340 # Server started, Redis version 3.2.12\n1:M 01 Jan 14:18:29.341 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jan 14:18:29.341 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  1 14:18:30.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9955s redis-master --namespace=kubectl-3442 --tail=1'
Jan  1 14:18:30.844: INFO: stderr: ""
Jan  1 14:18:30.844: INFO: stdout: "1:M 01 Jan 14:18:29.341 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  1 14:18:30.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9955s redis-master --namespace=kubectl-3442 --limit-bytes=1'
Jan  1 14:18:31.043: INFO: stderr: ""
Jan  1 14:18:31.043: INFO: stdout: " "
STEP: exposing timestamps
Jan  1 14:18:31.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9955s redis-master --namespace=kubectl-3442 --tail=1 --timestamps'
Jan  1 14:18:31.151: INFO: stderr: ""
Jan  1 14:18:31.151: INFO: stdout: "2020-01-01T14:18:29.342536422Z 1:M 01 Jan 14:18:29.341 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  1 14:18:33.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9955s redis-master --namespace=kubectl-3442 --since=1s'
Jan  1 14:18:33.878: INFO: stderr: ""
Jan  1 14:18:33.878: INFO: stdout: ""
Jan  1 14:18:33.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9955s redis-master --namespace=kubectl-3442 --since=24h'
Jan  1 14:18:34.142: INFO: stderr: ""
Jan  1 14:18:34.142: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 01 Jan 14:18:29.340 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jan 14:18:29.340 # Server started, Redis version 3.2.12\n1:M 01 Jan 14:18:29.341 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jan 14:18:29.341 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan  1 14:18:34.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3442'
Jan  1 14:18:34.251: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 14:18:34.252: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  1 14:18:34.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3442'
Jan  1 14:18:34.352: INFO: stderr: "No resources found.\n"
Jan  1 14:18:34.352: INFO: stdout: ""
Jan  1 14:18:34.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3442 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  1 14:18:34.477: INFO: stderr: ""
Jan  1 14:18:34.477: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:18:34.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3442" for this suite.
Jan  1 14:18:56.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:18:56.715: INFO: namespace kubectl-3442 deletion completed in 22.211615738s

• [SLOW TEST:34.922 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
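
Note: for reference, the filtering flags exercised above can be replayed against any pod; the pod and namespace names below are the ones from this run:

kubectl logs redis-master-9955s redis-master --namespace=kubectl-3442 --tail=1           # last line only
kubectl logs redis-master-9955s redis-master --namespace=kubectl-3442 --limit-bytes=1    # first byte only
kubectl logs redis-master-9955s redis-master --namespace=kubectl-3442 --tail=1 --timestamps   # prefix RFC3339 timestamps
kubectl logs redis-master-9955s redis-master --namespace=kubectl-3442 --since=1s         # only entries from the last second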
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:18:56.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-010f6acc-c132-42e8-9b96-fa4a16ec40fc
STEP: Creating a pod to test consume secrets
Jan  1 14:18:56.821: INFO: Waiting up to 5m0s for pod "pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07" in namespace "secrets-9179" to be "success or failure"
Jan  1 14:18:56.888: INFO: Pod "pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07": Phase="Pending", Reason="", readiness=false. Elapsed: 66.376409ms
Jan  1 14:18:58.898: INFO: Pod "pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076085625s
Jan  1 14:19:00.908: INFO: Pod "pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086875916s
Jan  1 14:19:02.949: INFO: Pod "pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127982968s
Jan  1 14:19:04.961: INFO: Pod "pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.139238341s
STEP: Saw pod success
Jan  1 14:19:04.961: INFO: Pod "pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07" satisfied condition "success or failure"
Jan  1 14:19:04.966: INFO: Trying to get logs from node iruya-node pod pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07 container secret-env-test: 
STEP: delete the pod
Jan  1 14:19:05.030: INFO: Waiting for pod pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07 to disappear
Jan  1 14:19:05.057: INFO: Pod pod-secrets-8e36203f-130f-4ea4-9fc5-e6e14ea2cb07 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:19:05.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9179" for this suite.
Jan  1 14:19:11.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:19:11.227: INFO: namespace secrets-9179 deletion completed in 6.161769018s

• [SLOW TEST:14.508 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
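
Note: the secret and pod specs are created programmatically and not echoed. A minimal sketch of consuming a secret through an environment variable (secret name, key, and pod name are hypothetical):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["env"]             # prints SECRET_DATA=value-1 to the pod log
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF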
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:19:11.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:19:37.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7975" for this suite.
Jan  1 14:19:43.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:19:43.788: INFO: namespace namespaces-7975 deletion completed in 6.176434124s
STEP: Destroying namespace "nsdeletetest-3889" for this suite.
Jan  1 14:19:43.795: INFO: Namespace nsdeletetest-3889 was already deleted
STEP: Destroying namespace "nsdeletetest-3144" for this suite.
Jan  1 14:19:49.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:19:50.059: INFO: namespace nsdeletetest-3144 deletion completed in 6.264762392s

• [SLOW TEST:38.832 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
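
Note: a rough manual analogue of the create/delete/recreate sequence above (namespace and pod names hypothetical):

kubectl create namespace nsdeletetest-demo
kubectl run test-pod --image=busybox --restart=Never -n nsdeletetest-demo -- sleep 3600
kubectl delete namespace nsdeletetest-demo    # removes the namespace and every pod in it
kubectl create namespace nsdeletetest-demo    # recreate it
kubectl get pods -n nsdeletetest-demo         # should report no resources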
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:19:50.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  1 14:19:50.572: INFO: Waiting up to 5m0s for pod "downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2" in namespace "downward-api-4324" to be "success or failure"
Jan  1 14:19:50.589: INFO: Pod "downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.401803ms
Jan  1 14:19:52.608: INFO: Pod "downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035807196s
Jan  1 14:19:54.630: INFO: Pod "downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05758496s
Jan  1 14:19:56.649: INFO: Pod "downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07609764s
Jan  1 14:19:58.660: INFO: Pod "downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087436349s
STEP: Saw pod success
Jan  1 14:19:58.660: INFO: Pod "downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2" satisfied condition "success or failure"
Jan  1 14:19:58.690: INFO: Trying to get logs from node iruya-node pod downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2 container dapi-container: 
STEP: delete the pod
Jan  1 14:19:58.779: INFO: Waiting for pod downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2 to disappear
Jan  1 14:19:58.786: INFO: Pod downward-api-bebd342d-730c-4dbc-8dee-b3f23afc5ef2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:19:58.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4324" for this suite.
Jan  1 14:20:04.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:20:05.049: INFO: namespace downward-api-4324 deletion completed in 6.224616445s

• [SLOW TEST:14.989 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
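
Note: the downward-api pod spec is not echoed in the log. A sketch of exposing the host IP through the downward API (pod name hypothetical; the fieldPath is the standard one):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-host-ip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # the IP of the node the pod landed on
EOF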
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:20:05.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  1 14:20:25.316: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:25.316: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:25.676: INFO: Exec stderr: ""
Jan  1 14:20:25.677: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:25.677: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:26.014: INFO: Exec stderr: ""
Jan  1 14:20:26.015: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:26.015: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:26.454: INFO: Exec stderr: ""
Jan  1 14:20:26.455: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:26.455: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:26.934: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  1 14:20:26.934: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:26.935: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:27.265: INFO: Exec stderr: ""
Jan  1 14:20:27.265: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:27.266: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:27.701: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  1 14:20:27.701: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:27.702: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:28.144: INFO: Exec stderr: ""
Jan  1 14:20:28.145: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:28.146: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:28.534: INFO: Exec stderr: ""
Jan  1 14:20:28.535: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:28.535: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:28.940: INFO: Exec stderr: ""
Jan  1 14:20:28.940: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7659 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:20:28.941: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:20:29.297: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:20:29.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7659" for this suite.
Jan  1 14:21:21.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:21:21.490: INFO: namespace e2e-kubelet-etc-hosts-7659 deletion completed in 52.176456147s

• [SLOW TEST:76.436 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
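
Note: the pod variants the test sets up are not echoed. A sketch of checking whether /etc/hosts is kubelet-managed (pod name hypothetical; a managed file begins with a marker comment, while pods with hostNetwork: true or an explicit /etc/hosts mount see an unmanaged file):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl exec etc-hosts-demo -- head -1 /etc/hosts    # marker comment if kubelet-managed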
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:21:21.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6478
I0101 14:21:21.697092       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6478, replica count: 1
I0101 14:21:22.748340       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 14:21:23.749285       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 14:21:24.751164       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 14:21:25.752143       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 14:21:26.752670       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 14:21:27.753699       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 14:21:28.754894       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 14:21:29.755931       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0101 14:21:30.756523       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  1 14:21:30.925: INFO: Created: latency-svc-4zd8v
Jan  1 14:21:30.993: INFO: Got endpoints: latency-svc-4zd8v [136.772921ms]
Jan  1 14:21:31.061: INFO: Created: latency-svc-ww9xr
Jan  1 14:21:31.067: INFO: Got endpoints: latency-svc-ww9xr [72.063886ms]
Jan  1 14:21:31.184: INFO: Created: latency-svc-45cdh
Jan  1 14:21:31.202: INFO: Got endpoints: latency-svc-45cdh [206.809112ms]
Jan  1 14:21:31.274: INFO: Created: latency-svc-cmnmp
Jan  1 14:21:31.337: INFO: Got endpoints: latency-svc-cmnmp [339.691583ms]
Jan  1 14:21:31.378: INFO: Created: latency-svc-47gjz
Jan  1 14:21:31.381: INFO: Got endpoints: latency-svc-47gjz [384.712699ms]
Jan  1 14:21:31.433: INFO: Created: latency-svc-mk7pk
Jan  1 14:21:31.543: INFO: Got endpoints: latency-svc-mk7pk [545.478771ms]
Jan  1 14:21:31.577: INFO: Created: latency-svc-69hgp
Jan  1 14:21:31.601: INFO: Got endpoints: latency-svc-69hgp [603.849718ms]
Jan  1 14:21:31.612: INFO: Created: latency-svc-t8bp2
Jan  1 14:21:31.621: INFO: Got endpoints: latency-svc-t8bp2 [624.68722ms]
Jan  1 14:21:31.724: INFO: Created: latency-svc-gvbx4
Jan  1 14:21:31.730: INFO: Got endpoints: latency-svc-gvbx4 [735.031233ms]
Jan  1 14:21:31.766: INFO: Created: latency-svc-ptp5j
Jan  1 14:21:31.793: INFO: Got endpoints: latency-svc-ptp5j [797.735405ms]
Jan  1 14:21:31.891: INFO: Created: latency-svc-6wzqt
Jan  1 14:21:31.943: INFO: Got endpoints: latency-svc-6wzqt [948.562348ms]
Jan  1 14:21:31.948: INFO: Created: latency-svc-ch4d2
Jan  1 14:21:31.965: INFO: Got endpoints: latency-svc-ch4d2 [967.612759ms]
Jan  1 14:21:32.145: INFO: Created: latency-svc-lsqdg
Jan  1 14:21:32.177: INFO: Got endpoints: latency-svc-lsqdg [1.17948269s]
Jan  1 14:21:32.243: INFO: Created: latency-svc-szshj
Jan  1 14:21:32.370: INFO: Got endpoints: latency-svc-szshj [1.372649403s]
Jan  1 14:21:32.387: INFO: Created: latency-svc-52f9g
Jan  1 14:21:32.393: INFO: Got endpoints: latency-svc-52f9g [215.63427ms]
Jan  1 14:21:32.434: INFO: Created: latency-svc-rc6qn
Jan  1 14:21:32.455: INFO: Got endpoints: latency-svc-rc6qn [1.456222955s]
Jan  1 14:21:32.575: INFO: Created: latency-svc-rn67s
Jan  1 14:21:32.591: INFO: Got endpoints: latency-svc-rn67s [1.595387032s]
Jan  1 14:21:32.670: INFO: Created: latency-svc-kwsb9
Jan  1 14:21:32.671: INFO: Got endpoints: latency-svc-kwsb9 [1.603543209s]
Jan  1 14:21:32.777: INFO: Created: latency-svc-fq857
Jan  1 14:21:32.783: INFO: Got endpoints: latency-svc-fq857 [1.580126118s]
Jan  1 14:21:32.860: INFO: Created: latency-svc-rkfhg
Jan  1 14:21:32.974: INFO: Got endpoints: latency-svc-rkfhg [1.636704992s]
Jan  1 14:21:32.982: INFO: Created: latency-svc-dqmnq
Jan  1 14:21:33.008: INFO: Got endpoints: latency-svc-dqmnq [1.6274029s]
Jan  1 14:21:33.054: INFO: Created: latency-svc-4d6cz
Jan  1 14:21:33.165: INFO: Got endpoints: latency-svc-4d6cz [1.621799348s]
Jan  1 14:21:33.194: INFO: Created: latency-svc-pzt7x
Jan  1 14:21:33.198: INFO: Got endpoints: latency-svc-pzt7x [1.596931656s]
Jan  1 14:21:33.243: INFO: Created: latency-svc-hlcvq
Jan  1 14:21:33.253: INFO: Got endpoints: latency-svc-hlcvq [1.6324332s]
Jan  1 14:21:33.398: INFO: Created: latency-svc-24g54
Jan  1 14:21:33.411: INFO: Got endpoints: latency-svc-24g54 [1.680539708s]
Jan  1 14:21:33.446: INFO: Created: latency-svc-sj5mp
Jan  1 14:21:33.513: INFO: Got endpoints: latency-svc-sj5mp [1.719808461s]
Jan  1 14:21:33.523: INFO: Created: latency-svc-pfbmm
Jan  1 14:21:33.537: INFO: Got endpoints: latency-svc-pfbmm [1.594434116s]
Jan  1 14:21:33.573: INFO: Created: latency-svc-4cdjt
Jan  1 14:21:33.590: INFO: Got endpoints: latency-svc-4cdjt [1.624973631s]
Jan  1 14:21:33.689: INFO: Created: latency-svc-xmj89
Jan  1 14:21:33.692: INFO: Got endpoints: latency-svc-xmj89 [1.321794182s]
Jan  1 14:21:33.736: INFO: Created: latency-svc-ncb7p
Jan  1 14:21:33.755: INFO: Got endpoints: latency-svc-ncb7p [1.361670893s]
Jan  1 14:21:33.867: INFO: Created: latency-svc-vk9dx
Jan  1 14:21:33.882: INFO: Got endpoints: latency-svc-vk9dx [1.425857193s]
Jan  1 14:21:33.939: INFO: Created: latency-svc-xcjgx
Jan  1 14:21:34.030: INFO: Got endpoints: latency-svc-xcjgx [1.43821495s]
Jan  1 14:21:34.040: INFO: Created: latency-svc-nnlgq
Jan  1 14:21:34.049: INFO: Got endpoints: latency-svc-nnlgq [1.377899911s]
Jan  1 14:21:34.106: INFO: Created: latency-svc-45ldj
Jan  1 14:21:34.202: INFO: Got endpoints: latency-svc-45ldj [1.41924855s]
Jan  1 14:21:34.290: INFO: Created: latency-svc-257q9
Jan  1 14:21:34.394: INFO: Got endpoints: latency-svc-257q9 [1.419213611s]
Jan  1 14:21:34.395: INFO: Created: latency-svc-fbh5p
Jan  1 14:21:34.411: INFO: Got endpoints: latency-svc-fbh5p [1.402124809s]
Jan  1 14:21:34.545: INFO: Created: latency-svc-fhlqc
Jan  1 14:21:34.545: INFO: Got endpoints: latency-svc-fhlqc [1.379410553s]
Jan  1 14:21:34.617: INFO: Created: latency-svc-h487s
Jan  1 14:21:34.617: INFO: Got endpoints: latency-svc-h487s [1.418645871s]
Jan  1 14:21:34.715: INFO: Created: latency-svc-hms6b
Jan  1 14:21:34.737: INFO: Got endpoints: latency-svc-hms6b [1.484000905s]
Jan  1 14:21:34.763: INFO: Created: latency-svc-9csnt
Jan  1 14:21:34.770: INFO: Got endpoints: latency-svc-9csnt [1.358255073s]
Jan  1 14:21:34.880: INFO: Created: latency-svc-cqnxb
Jan  1 14:21:34.890: INFO: Got endpoints: latency-svc-cqnxb [1.376753372s]
Jan  1 14:21:34.926: INFO: Created: latency-svc-n972r
Jan  1 14:21:34.942: INFO: Got endpoints: latency-svc-n972r [1.404347942s]
Jan  1 14:21:35.048: INFO: Created: latency-svc-kmjnq
Jan  1 14:21:35.056: INFO: Got endpoints: latency-svc-kmjnq [1.465729262s]
Jan  1 14:21:35.222: INFO: Created: latency-svc-8m24g
Jan  1 14:21:35.265: INFO: Got endpoints: latency-svc-8m24g [1.571995496s]
Jan  1 14:21:35.271: INFO: Created: latency-svc-qx5rf
Jan  1 14:21:35.281: INFO: Got endpoints: latency-svc-qx5rf [1.525626561s]
Jan  1 14:21:35.396: INFO: Created: latency-svc-d5wlk
Jan  1 14:21:35.413: INFO: Got endpoints: latency-svc-d5wlk [1.530502155s]
Jan  1 14:21:35.471: INFO: Created: latency-svc-nm5cl
Jan  1 14:21:35.563: INFO: Got endpoints: latency-svc-nm5cl [1.514212889s]
Jan  1 14:21:35.587: INFO: Created: latency-svc-gjcc9
Jan  1 14:21:35.595: INFO: Got endpoints: latency-svc-gjcc9 [1.563615173s]
Jan  1 14:21:35.629: INFO: Created: latency-svc-qznbd
Jan  1 14:21:35.641: INFO: Got endpoints: latency-svc-qznbd [1.438393213s]
Jan  1 14:21:35.737: INFO: Created: latency-svc-bfg76
Jan  1 14:21:35.746: INFO: Got endpoints: latency-svc-bfg76 [1.35170054s]
Jan  1 14:21:35.793: INFO: Created: latency-svc-42f74
Jan  1 14:21:35.799: INFO: Got endpoints: latency-svc-42f74 [1.387855772s]
Jan  1 14:21:35.922: INFO: Created: latency-svc-fttt2
Jan  1 14:21:35.923: INFO: Got endpoints: latency-svc-fttt2 [1.377619971s]
Jan  1 14:21:35.975: INFO: Created: latency-svc-6x54w
Jan  1 14:21:35.976: INFO: Got endpoints: latency-svc-6x54w [1.358356699s]
Jan  1 14:21:36.082: INFO: Created: latency-svc-9jgkq
Jan  1 14:21:36.095: INFO: Got endpoints: latency-svc-9jgkq [1.35692324s]
Jan  1 14:21:36.262: INFO: Created: latency-svc-q2z5g
Jan  1 14:21:36.265: INFO: Got endpoints: latency-svc-q2z5g [1.49558369s]
Jan  1 14:21:36.351: INFO: Created: latency-svc-gfkcm
Jan  1 14:21:36.558: INFO: Got endpoints: latency-svc-gfkcm [1.667175995s]
Jan  1 14:21:36.611: INFO: Created: latency-svc-pwj9j
Jan  1 14:21:36.612: INFO: Got endpoints: latency-svc-pwj9j [1.669361794s]
Jan  1 14:21:36.679: INFO: Created: latency-svc-9pnpx
Jan  1 14:21:36.685: INFO: Got endpoints: latency-svc-9pnpx [1.629078457s]
Jan  1 14:21:36.745: INFO: Created: latency-svc-c99nb
Jan  1 14:21:36.750: INFO: Got endpoints: latency-svc-c99nb [1.484861134s]
Jan  1 14:21:36.852: INFO: Created: latency-svc-zfkcz
Jan  1 14:21:36.867: INFO: Got endpoints: latency-svc-zfkcz [1.58584977s]
Jan  1 14:21:36.947: INFO: Created: latency-svc-cfdr4
Jan  1 14:21:36.947: INFO: Got endpoints: latency-svc-cfdr4 [1.533815675s]
Jan  1 14:21:37.082: INFO: Created: latency-svc-l2s95
Jan  1 14:21:37.089: INFO: Got endpoints: latency-svc-l2s95 [1.525599614s]
Jan  1 14:21:37.225: INFO: Created: latency-svc-qsmkz
Jan  1 14:21:37.240: INFO: Got endpoints: latency-svc-qsmkz [1.644638894s]
Jan  1 14:21:37.313: INFO: Created: latency-svc-ctp8d
Jan  1 14:21:37.427: INFO: Got endpoints: latency-svc-ctp8d [1.786182468s]
Jan  1 14:21:37.429: INFO: Created: latency-svc-x2d28
Jan  1 14:21:37.437: INFO: Got endpoints: latency-svc-x2d28 [1.690637057s]
Jan  1 14:21:37.503: INFO: Created: latency-svc-rv4l8
Jan  1 14:21:37.509: INFO: Got endpoints: latency-svc-rv4l8 [1.70986393s]
Jan  1 14:21:37.726: INFO: Created: latency-svc-cqhjq
Jan  1 14:21:37.738: INFO: Got endpoints: latency-svc-cqhjq [1.815263824s]
Jan  1 14:21:37.783: INFO: Created: latency-svc-cps7c
Jan  1 14:21:37.787: INFO: Got endpoints: latency-svc-cps7c [1.810867785s]
Jan  1 14:21:37.924: INFO: Created: latency-svc-jkj2h
Jan  1 14:21:37.938: INFO: Got endpoints: latency-svc-jkj2h [1.842393259s]
Jan  1 14:21:37.991: INFO: Created: latency-svc-zpxgw
Jan  1 14:21:38.130: INFO: Got endpoints: latency-svc-zpxgw [1.864326541s]
Jan  1 14:21:38.188: INFO: Created: latency-svc-5lrlx
Jan  1 14:21:38.191: INFO: Got endpoints: latency-svc-5lrlx [1.632234194s]
Jan  1 14:21:38.308: INFO: Created: latency-svc-bglxj
Jan  1 14:21:38.332: INFO: Got endpoints: latency-svc-bglxj [1.720211633s]
Jan  1 14:21:38.390: INFO: Created: latency-svc-j6sc9
Jan  1 14:21:38.405: INFO: Got endpoints: latency-svc-j6sc9 [1.719586093s]
Jan  1 14:21:38.534: INFO: Created: latency-svc-sncr4
Jan  1 14:21:38.548: INFO: Got endpoints: latency-svc-sncr4 [1.797318696s]
Jan  1 14:21:38.744: INFO: Created: latency-svc-zxzrh
Jan  1 14:21:38.762: INFO: Got endpoints: latency-svc-zxzrh [1.894151918s]
Jan  1 14:21:38.809: INFO: Created: latency-svc-vft8c
Jan  1 14:21:38.928: INFO: Got endpoints: latency-svc-vft8c [1.980736278s]
Jan  1 14:21:38.944: INFO: Created: latency-svc-hw2nh
Jan  1 14:21:38.945: INFO: Got endpoints: latency-svc-hw2nh [1.856341328s]
Jan  1 14:21:38.994: INFO: Created: latency-svc-l8wdw
Jan  1 14:21:39.000: INFO: Got endpoints: latency-svc-l8wdw [1.759528761s]
Jan  1 14:21:39.151: INFO: Created: latency-svc-26744
Jan  1 14:21:39.153: INFO: Got endpoints: latency-svc-26744 [1.724709041s]
Jan  1 14:21:39.185: INFO: Created: latency-svc-llvkf
Jan  1 14:21:39.189: INFO: Got endpoints: latency-svc-llvkf [1.751955965s]
Jan  1 14:21:39.246: INFO: Created: latency-svc-c2rzz
Jan  1 14:21:39.358: INFO: Got endpoints: latency-svc-c2rzz [1.848716463s]
Jan  1 14:21:39.367: INFO: Created: latency-svc-l854k
Jan  1 14:21:39.384: INFO: Got endpoints: latency-svc-l854k [1.645641443s]
Jan  1 14:21:39.653: INFO: Created: latency-svc-x9svx
Jan  1 14:21:39.667: INFO: Got endpoints: latency-svc-x9svx [1.880368121s]
Jan  1 14:21:39.702: INFO: Created: latency-svc-hjddh
Jan  1 14:21:39.710: INFO: Got endpoints: latency-svc-hjddh [1.771978983s]
Jan  1 14:21:39.848: INFO: Created: latency-svc-pbxgm
Jan  1 14:21:39.852: INFO: Got endpoints: latency-svc-pbxgm [1.720932653s]
Jan  1 14:21:39.906: INFO: Created: latency-svc-vrkp2
Jan  1 14:21:40.038: INFO: Got endpoints: latency-svc-vrkp2 [1.847021092s]
Jan  1 14:21:40.043: INFO: Created: latency-svc-ktwrq
Jan  1 14:21:40.075: INFO: Got endpoints: latency-svc-ktwrq [1.742645937s]
Jan  1 14:21:40.113: INFO: Created: latency-svc-9jnpb
Jan  1 14:21:40.123: INFO: Got endpoints: latency-svc-9jnpb [1.717954485s]
Jan  1 14:21:40.249: INFO: Created: latency-svc-w8wtq
Jan  1 14:21:40.266: INFO: Got endpoints: latency-svc-w8wtq [1.717366182s]
Jan  1 14:21:40.328: INFO: Created: latency-svc-qj7sj
Jan  1 14:21:40.454: INFO: Created: latency-svc-pvcqv
Jan  1 14:21:40.464: INFO: Got endpoints: latency-svc-qj7sj [1.701233041s]
Jan  1 14:21:40.477: INFO: Got endpoints: latency-svc-pvcqv [1.548128768s]
Jan  1 14:21:40.537: INFO: Created: latency-svc-gnwn7
Jan  1 14:21:40.659: INFO: Got endpoints: latency-svc-gnwn7 [1.713397596s]
Jan  1 14:21:40.673: INFO: Created: latency-svc-5v67n
Jan  1 14:21:40.702: INFO: Got endpoints: latency-svc-5v67n [1.701684745s]
Jan  1 14:21:40.721: INFO: Created: latency-svc-vmhz2
Jan  1 14:21:40.833: INFO: Got endpoints: latency-svc-vmhz2 [1.680496854s]
Jan  1 14:21:40.883: INFO: Created: latency-svc-z98z8
Jan  1 14:21:40.920: INFO: Got endpoints: latency-svc-z98z8 [1.731118258s]
Jan  1 14:21:41.036: INFO: Created: latency-svc-t4rr4
Jan  1 14:21:41.050: INFO: Got endpoints: latency-svc-t4rr4 [1.69105548s]
Jan  1 14:21:41.119: INFO: Created: latency-svc-5sqxw
Jan  1 14:21:41.123: INFO: Got endpoints: latency-svc-5sqxw [1.738259621s]
Jan  1 14:21:41.227: INFO: Created: latency-svc-z7z7z
Jan  1 14:21:41.235: INFO: Got endpoints: latency-svc-z7z7z [1.566998478s]
Jan  1 14:21:41.275: INFO: Created: latency-svc-rds79
Jan  1 14:21:41.294: INFO: Got endpoints: latency-svc-rds79 [1.583473595s]
Jan  1 14:21:41.315: INFO: Created: latency-svc-hgg9p
Jan  1 14:21:41.482: INFO: Got endpoints: latency-svc-hgg9p [1.62977475s]
Jan  1 14:21:41.496: INFO: Created: latency-svc-cthcj
Jan  1 14:21:41.506: INFO: Got endpoints: latency-svc-cthcj [1.468430591s]
Jan  1 14:21:41.543: INFO: Created: latency-svc-f62qq
Jan  1 14:21:41.550: INFO: Got endpoints: latency-svc-f62qq [1.474352156s]
Jan  1 14:21:41.696: INFO: Created: latency-svc-9mvfm
Jan  1 14:21:41.704: INFO: Got endpoints: latency-svc-9mvfm [1.580500975s]
Jan  1 14:21:41.759: INFO: Created: latency-svc-ztxlm
Jan  1 14:21:41.787: INFO: Got endpoints: latency-svc-ztxlm [1.521295578s]
Jan  1 14:21:41.952: INFO: Created: latency-svc-vqsqz
Jan  1 14:21:41.984: INFO: Got endpoints: latency-svc-vqsqz [1.520020876s]
Jan  1 14:21:42.015: INFO: Created: latency-svc-xdshn
Jan  1 14:21:42.023: INFO: Got endpoints: latency-svc-xdshn [1.545404346s]
Jan  1 14:21:42.274: INFO: Created: latency-svc-b7xcp
Jan  1 14:21:42.293: INFO: Got endpoints: latency-svc-b7xcp [1.633364009s]
Jan  1 14:21:42.384: INFO: Created: latency-svc-wk69s
Jan  1 14:21:42.590: INFO: Got endpoints: latency-svc-wk69s [1.887641456s]
Jan  1 14:21:42.608: INFO: Created: latency-svc-rqs74
Jan  1 14:21:42.635: INFO: Got endpoints: latency-svc-rqs74 [1.801579242s]
Jan  1 14:21:42.666: INFO: Created: latency-svc-bbqv9
Jan  1 14:21:42.689: INFO: Got endpoints: latency-svc-bbqv9 [1.767989641s]
Jan  1 14:21:42.851: INFO: Created: latency-svc-zvbww
Jan  1 14:21:42.870: INFO: Got endpoints: latency-svc-zvbww [1.819865295s]
Jan  1 14:21:42.918: INFO: Created: latency-svc-d57q8
Jan  1 14:21:42.940: INFO: Got endpoints: latency-svc-d57q8 [1.816882282s]
Jan  1 14:21:43.114: INFO: Created: latency-svc-5gctv
Jan  1 14:21:43.115: INFO: Got endpoints: latency-svc-5gctv [1.880220596s]
Jan  1 14:21:43.144: INFO: Created: latency-svc-8bg7k
Jan  1 14:21:43.151: INFO: Got endpoints: latency-svc-8bg7k [1.856792513s]
Jan  1 14:21:43.321: INFO: Created: latency-svc-28h45
Jan  1 14:21:43.332: INFO: Got endpoints: latency-svc-28h45 [1.850177628s]
Jan  1 14:21:43.397: INFO: Created: latency-svc-rxvr8
Jan  1 14:21:43.406: INFO: Got endpoints: latency-svc-rxvr8 [1.899685587s]
Jan  1 14:21:43.545: INFO: Created: latency-svc-cb8wv
Jan  1 14:21:43.578: INFO: Got endpoints: latency-svc-cb8wv [2.027751249s]
Jan  1 14:21:43.620: INFO: Created: latency-svc-nmthd
Jan  1 14:21:43.760: INFO: Got endpoints: latency-svc-nmthd [2.055725358s]
Jan  1 14:21:43.781: INFO: Created: latency-svc-d6v6z
Jan  1 14:21:43.802: INFO: Got endpoints: latency-svc-d6v6z [2.014279906s]
Jan  1 14:21:43.858: INFO: Created: latency-svc-46rx5
Jan  1 14:21:43.972: INFO: Got endpoints: latency-svc-46rx5 [1.986944162s]
Jan  1 14:21:44.006: INFO: Created: latency-svc-zbp2m
Jan  1 14:21:44.018: INFO: Got endpoints: latency-svc-zbp2m [1.994706085s]
Jan  1 14:21:44.149: INFO: Created: latency-svc-hpk7k
Jan  1 14:21:44.158: INFO: Got endpoints: latency-svc-hpk7k [1.86483853s]
Jan  1 14:21:44.210: INFO: Created: latency-svc-299hz
Jan  1 14:21:44.218: INFO: Got endpoints: latency-svc-299hz [1.626794147s]
Jan  1 14:21:44.373: INFO: Created: latency-svc-nqcm4
Jan  1 14:21:44.423: INFO: Got endpoints: latency-svc-nqcm4 [1.785627573s]
Jan  1 14:21:44.441: INFO: Created: latency-svc-7xv9k
Jan  1 14:21:44.441: INFO: Got endpoints: latency-svc-7xv9k [1.751213729s]
Jan  1 14:21:44.961: INFO: Created: latency-svc-njzw7
Jan  1 14:21:44.978: INFO: Got endpoints: latency-svc-njzw7 [2.107321096s]
Jan  1 14:21:45.148: INFO: Created: latency-svc-r2tnw
Jan  1 14:21:45.167: INFO: Got endpoints: latency-svc-r2tnw [2.226612114s]
Jan  1 14:21:45.209: INFO: Created: latency-svc-6zsr9
Jan  1 14:21:45.216: INFO: Got endpoints: latency-svc-6zsr9 [2.101312175s]
Jan  1 14:21:45.246: INFO: Created: latency-svc-pr6sv
Jan  1 14:21:45.366: INFO: Got endpoints: latency-svc-pr6sv [2.215384235s]
Jan  1 14:21:45.379: INFO: Created: latency-svc-hlg4g
Jan  1 14:21:45.384: INFO: Got endpoints: latency-svc-hlg4g [2.05189632s]
Jan  1 14:21:45.432: INFO: Created: latency-svc-4br45
Jan  1 14:21:45.441: INFO: Got endpoints: latency-svc-4br45 [2.03432253s]
Jan  1 14:21:45.561: INFO: Created: latency-svc-k968l
Jan  1 14:21:45.567: INFO: Got endpoints: latency-svc-k968l [1.988232454s]
Jan  1 14:21:45.619: INFO: Created: latency-svc-vdd8m
Jan  1 14:21:45.744: INFO: Got endpoints: latency-svc-vdd8m [1.983465031s]
Jan  1 14:21:45.753: INFO: Created: latency-svc-dfctb
Jan  1 14:21:45.753: INFO: Got endpoints: latency-svc-dfctb [1.950877721s]
Jan  1 14:21:45.803: INFO: Created: latency-svc-kdxl5
Jan  1 14:21:45.821: INFO: Got endpoints: latency-svc-kdxl5 [1.84841966s]
Jan  1 14:21:45.908: INFO: Created: latency-svc-jsl8z
Jan  1 14:21:45.930: INFO: Got endpoints: latency-svc-jsl8z [1.910973824s]
Jan  1 14:21:45.971: INFO: Created: latency-svc-sh8s2
Jan  1 14:21:46.092: INFO: Got endpoints: latency-svc-sh8s2 [1.932765001s]
Jan  1 14:21:46.108: INFO: Created: latency-svc-vvgck
Jan  1 14:21:46.108: INFO: Got endpoints: latency-svc-vvgck [1.889777881s]
Jan  1 14:21:46.174: INFO: Created: latency-svc-g4f2z
Jan  1 14:21:46.308: INFO: Got endpoints: latency-svc-g4f2z [1.884599767s]
Jan  1 14:21:46.339: INFO: Created: latency-svc-zl6rq
Jan  1 14:21:46.348: INFO: Got endpoints: latency-svc-zl6rq [1.907244978s]
Jan  1 14:21:46.396: INFO: Created: latency-svc-jrphs
Jan  1 14:21:46.402: INFO: Got endpoints: latency-svc-jrphs [1.423943415s]
Jan  1 14:21:46.555: INFO: Created: latency-svc-qkh8p
Jan  1 14:21:46.565: INFO: Got endpoints: latency-svc-qkh8p [1.398484931s]
Jan  1 14:21:46.615: INFO: Created: latency-svc-pm9qr
Jan  1 14:21:46.717: INFO: Got endpoints: latency-svc-pm9qr [1.499578086s]
Jan  1 14:21:46.784: INFO: Created: latency-svc-5946m
Jan  1 14:21:46.874: INFO: Got endpoints: latency-svc-5946m [1.506839441s]
Jan  1 14:21:46.908: INFO: Created: latency-svc-xl5vp
Jan  1 14:21:46.942: INFO: Got endpoints: latency-svc-xl5vp [1.558042261s]
Jan  1 14:21:47.096: INFO: Created: latency-svc-6jmqg
Jan  1 14:21:47.190: INFO: Got endpoints: latency-svc-6jmqg [1.748776642s]
Jan  1 14:21:47.209: INFO: Created: latency-svc-wpqmg
Jan  1 14:21:47.275: INFO: Got endpoints: latency-svc-wpqmg [1.707868669s]
Jan  1 14:21:47.304: INFO: Created: latency-svc-92pgx
Jan  1 14:21:47.311: INFO: Got endpoints: latency-svc-92pgx [1.565995656s]
Jan  1 14:21:47.363: INFO: Created: latency-svc-k7vdp
Jan  1 14:21:47.444: INFO: Got endpoints: latency-svc-k7vdp [1.69107533s]
Jan  1 14:21:47.491: INFO: Created: latency-svc-k5rrr
Jan  1 14:21:47.504: INFO: Got endpoints: latency-svc-k5rrr [1.683160293s]
Jan  1 14:21:47.659: INFO: Created: latency-svc-89vhh
Jan  1 14:21:47.714: INFO: Got endpoints: latency-svc-89vhh [1.783668174s]
Jan  1 14:21:47.715: INFO: Created: latency-svc-kt8k9
Jan  1 14:21:47.736: INFO: Got endpoints: latency-svc-kt8k9 [1.642445061s]
Jan  1 14:21:47.816: INFO: Created: latency-svc-5hjrt
Jan  1 14:21:47.828: INFO: Got endpoints: latency-svc-5hjrt [1.719968493s]
Jan  1 14:21:47.872: INFO: Created: latency-svc-bxjhr
Jan  1 14:21:47.899: INFO: Got endpoints: latency-svc-bxjhr [1.590659174s]
Jan  1 14:21:47.976: INFO: Created: latency-svc-6smzn
Jan  1 14:21:47.977: INFO: Got endpoints: latency-svc-6smzn [1.628136146s]
Jan  1 14:21:48.027: INFO: Created: latency-svc-llkfd
Jan  1 14:21:48.032: INFO: Got endpoints: latency-svc-llkfd [1.629659612s]
Jan  1 14:21:48.155: INFO: Created: latency-svc-9fmgv
Jan  1 14:21:48.160: INFO: Got endpoints: latency-svc-9fmgv [1.594411789s]
Jan  1 14:21:48.197: INFO: Created: latency-svc-r8z22
Jan  1 14:21:48.218: INFO: Got endpoints: latency-svc-r8z22 [1.500459077s]
Jan  1 14:21:48.249: INFO: Created: latency-svc-ks5kc
Jan  1 14:21:48.250: INFO: Got endpoints: latency-svc-ks5kc [1.367282485s]
Jan  1 14:21:48.383: INFO: Created: latency-svc-fdplz
Jan  1 14:21:48.390: INFO: Got endpoints: latency-svc-fdplz [1.44700329s]
Jan  1 14:21:48.449: INFO: Created: latency-svc-s88hc
Jan  1 14:21:48.455: INFO: Got endpoints: latency-svc-s88hc [1.264862423s]
Jan  1 14:21:48.548: INFO: Created: latency-svc-fwdg5
Jan  1 14:21:48.569: INFO: Got endpoints: latency-svc-fwdg5 [1.293746868s]
Jan  1 14:21:48.749: INFO: Created: latency-svc-q4twv
Jan  1 14:21:48.776: INFO: Got endpoints: latency-svc-q4twv [1.4654593s]
Jan  1 14:21:48.812: INFO: Created: latency-svc-hhl2z
Jan  1 14:21:48.824: INFO: Got endpoints: latency-svc-hhl2z [1.379254504s]
Jan  1 14:21:48.964: INFO: Created: latency-svc-plwsf
Jan  1 14:21:48.969: INFO: Got endpoints: latency-svc-plwsf [1.463967471s]
Jan  1 14:21:49.048: INFO: Created: latency-svc-tvxsn
Jan  1 14:21:49.148: INFO: Got endpoints: latency-svc-tvxsn [1.434320663s]
Jan  1 14:21:49.180: INFO: Created: latency-svc-cl7r8
Jan  1 14:21:49.182: INFO: Got endpoints: latency-svc-cl7r8 [1.446220072s]
Jan  1 14:21:49.306: INFO: Created: latency-svc-5xlmx
Jan  1 14:21:49.313: INFO: Got endpoints: latency-svc-5xlmx [1.485315998s]
Jan  1 14:21:49.363: INFO: Created: latency-svc-lc8b5
Jan  1 14:21:49.379: INFO: Got endpoints: latency-svc-lc8b5 [1.479534519s]
Jan  1 14:21:49.492: INFO: Created: latency-svc-5lrvz
Jan  1 14:21:49.508: INFO: Got endpoints: latency-svc-5lrvz [1.530795966s]
Jan  1 14:21:49.546: INFO: Created: latency-svc-xl722
Jan  1 14:21:49.614: INFO: Got endpoints: latency-svc-xl722 [1.581549336s]
Jan  1 14:21:49.660: INFO: Created: latency-svc-hpkb5
Jan  1 14:21:49.696: INFO: Got endpoints: latency-svc-hpkb5 [1.536066964s]
Jan  1 14:21:49.814: INFO: Created: latency-svc-vd9g9
Jan  1 14:21:49.822: INFO: Got endpoints: latency-svc-vd9g9 [1.603766741s]
Jan  1 14:21:49.868: INFO: Created: latency-svc-7nv65
Jan  1 14:21:49.888: INFO: Got endpoints: latency-svc-7nv65 [1.63751006s]
Jan  1 14:21:49.993: INFO: Created: latency-svc-nrpb9
Jan  1 14:21:49.999: INFO: Got endpoints: latency-svc-nrpb9 [1.609333548s]
Jan  1 14:21:50.056: INFO: Created: latency-svc-7fbth
Jan  1 14:21:50.119: INFO: Got endpoints: latency-svc-7fbth [1.663259603s]
Jan  1 14:21:50.178: INFO: Created: latency-svc-9gnk6
Jan  1 14:21:50.193: INFO: Got endpoints: latency-svc-9gnk6 [1.623544743s]
Jan  1 14:21:50.295: INFO: Created: latency-svc-27gcn
Jan  1 14:21:50.296: INFO: Got endpoints: latency-svc-27gcn [1.519047002s]
Jan  1 14:21:50.373: INFO: Created: latency-svc-qqckj
Jan  1 14:21:50.457: INFO: Got endpoints: latency-svc-qqckj [1.632185703s]
Jan  1 14:21:50.463: INFO: Created: latency-svc-vqknm
Jan  1 14:21:50.474: INFO: Got endpoints: latency-svc-vqknm [1.504005428s]
Jan  1 14:21:50.628: INFO: Created: latency-svc-tvvwd
Jan  1 14:21:50.673: INFO: Got endpoints: latency-svc-tvvwd [1.523700905s]
Jan  1 14:21:50.677: INFO: Created: latency-svc-pjdx2
Jan  1 14:21:50.689: INFO: Got endpoints: latency-svc-pjdx2 [1.506474542s]
Jan  1 14:21:50.764: INFO: Created: latency-svc-zn47h
Jan  1 14:21:50.820: INFO: Created: latency-svc-rrqfm
Jan  1 14:21:50.833: INFO: Got endpoints: latency-svc-zn47h [1.51989548s]
Jan  1 14:21:50.835: INFO: Got endpoints: latency-svc-rrqfm [1.455119559s]
Jan  1 14:21:50.946: INFO: Created: latency-svc-wr9lb
Jan  1 14:21:50.954: INFO: Got endpoints: latency-svc-wr9lb [1.446861967s]
Jan  1 14:21:51.024: INFO: Created: latency-svc-2bswr
Jan  1 14:21:51.129: INFO: Got endpoints: latency-svc-2bswr [1.513937201s]
Jan  1 14:21:51.156: INFO: Created: latency-svc-wzkkn
Jan  1 14:21:51.166: INFO: Got endpoints: latency-svc-wzkkn [1.469235889s]
Jan  1 14:21:51.196: INFO: Created: latency-svc-6vhs8
Jan  1 14:21:51.211: INFO: Got endpoints: latency-svc-6vhs8 [1.388930487s]
Jan  1 14:21:51.297: INFO: Created: latency-svc-bd6bn
Jan  1 14:21:51.302: INFO: Got endpoints: latency-svc-bd6bn [1.414056251s]
Jan  1 14:21:51.338: INFO: Created: latency-svc-d87h6
Jan  1 14:21:51.349: INFO: Got endpoints: latency-svc-d87h6 [1.349685518s]
Jan  1 14:21:51.448: INFO: Created: latency-svc-ln6df
Jan  1 14:21:51.465: INFO: Got endpoints: latency-svc-ln6df [1.34569181s]
Jan  1 14:21:51.503: INFO: Created: latency-svc-vtrp6
Jan  1 14:21:51.513: INFO: Got endpoints: latency-svc-vtrp6 [1.319502582s]
Jan  1 14:21:51.603: INFO: Created: latency-svc-fffcm
Jan  1 14:21:51.615: INFO: Got endpoints: latency-svc-fffcm [1.318752256s]
Jan  1 14:21:51.657: INFO: Created: latency-svc-dgqqs
Jan  1 14:21:51.670: INFO: Got endpoints: latency-svc-dgqqs [1.212152322s]
Jan  1 14:21:51.830: INFO: Created: latency-svc-qnd5v
Jan  1 14:21:51.841: INFO: Got endpoints: latency-svc-qnd5v [1.366581526s]
Jan  1 14:21:51.869: INFO: Created: latency-svc-clzmw
Jan  1 14:21:51.882: INFO: Got endpoints: latency-svc-clzmw [1.208424711s]
Jan  1 14:21:51.965: INFO: Created: latency-svc-86l6z
Jan  1 14:21:51.975: INFO: Got endpoints: latency-svc-86l6z [1.28633321s]
Jan  1 14:21:52.031: INFO: Created: latency-svc-rpfxl
Jan  1 14:21:52.156: INFO: Got endpoints: latency-svc-rpfxl [1.3219907s]
Jan  1 14:21:52.159: INFO: Created: latency-svc-nbncn
Jan  1 14:21:52.178: INFO: Got endpoints: latency-svc-nbncn [1.343258373s]
Jan  1 14:21:52.264: INFO: Created: latency-svc-qsbhv
Jan  1 14:21:52.404: INFO: Got endpoints: latency-svc-qsbhv [1.449606553s]
Jan  1 14:21:52.469: INFO: Created: latency-svc-f44jp
Jan  1 14:21:52.644: INFO: Got endpoints: latency-svc-f44jp [1.514036741s]
Jan  1 14:21:52.644: INFO: Latencies: [72.063886ms 206.809112ms 215.63427ms 339.691583ms 384.712699ms 545.478771ms 603.849718ms 624.68722ms 735.031233ms 797.735405ms 948.562348ms 967.612759ms 1.17948269s 1.208424711s 1.212152322s 1.264862423s 1.28633321s 1.293746868s 1.318752256s 1.319502582s 1.321794182s 1.3219907s 1.343258373s 1.34569181s 1.349685518s 1.35170054s 1.35692324s 1.358255073s 1.358356699s 1.361670893s 1.366581526s 1.367282485s 1.372649403s 1.376753372s 1.377619971s 1.377899911s 1.379254504s 1.379410553s 1.387855772s 1.388930487s 1.398484931s 1.402124809s 1.404347942s 1.414056251s 1.418645871s 1.419213611s 1.41924855s 1.423943415s 1.425857193s 1.434320663s 1.43821495s 1.438393213s 1.446220072s 1.446861967s 1.44700329s 1.449606553s 1.455119559s 1.456222955s 1.463967471s 1.4654593s 1.465729262s 1.468430591s 1.469235889s 1.474352156s 1.479534519s 1.484000905s 1.484861134s 1.485315998s 1.49558369s 1.499578086s 1.500459077s 1.504005428s 1.506474542s 1.506839441s 1.513937201s 1.514036741s 1.514212889s 1.519047002s 1.51989548s 1.520020876s 1.521295578s 1.523700905s 1.525599614s 1.525626561s 1.530502155s 1.530795966s 1.533815675s 1.536066964s 1.545404346s 1.548128768s 1.558042261s 1.563615173s 1.565995656s 1.566998478s 1.571995496s 1.580126118s 1.580500975s 1.581549336s 1.583473595s 1.58584977s 1.590659174s 1.594411789s 1.594434116s 1.595387032s 1.596931656s 1.603543209s 1.603766741s 1.609333548s 1.621799348s 1.623544743s 1.624973631s 1.626794147s 1.6274029s 1.628136146s 1.629078457s 1.629659612s 1.62977475s 1.632185703s 1.632234194s 1.6324332s 1.633364009s 1.636704992s 1.63751006s 1.642445061s 1.644638894s 1.645641443s 1.663259603s 1.667175995s 1.669361794s 1.680496854s 1.680539708s 1.683160293s 1.690637057s 1.69105548s 1.69107533s 1.701233041s 1.701684745s 1.707868669s 1.70986393s 1.713397596s 1.717366182s 1.717954485s 1.719586093s 1.719808461s 1.719968493s 1.720211633s 1.720932653s 1.724709041s 1.731118258s 1.738259621s 1.742645937s 1.748776642s 1.751213729s 1.751955965s 1.759528761s 1.767989641s 1.771978983s 1.783668174s 1.785627573s 1.786182468s 1.797318696s 1.801579242s 1.810867785s 1.815263824s 1.816882282s 1.819865295s 1.842393259s 1.847021092s 1.84841966s 1.848716463s 1.850177628s 1.856341328s 1.856792513s 1.864326541s 1.86483853s 1.880220596s 1.880368121s 1.884599767s 1.887641456s 1.889777881s 1.894151918s 1.899685587s 1.907244978s 1.910973824s 1.932765001s 1.950877721s 1.980736278s 1.983465031s 1.986944162s 1.988232454s 1.994706085s 2.014279906s 2.027751249s 2.03432253s 2.05189632s 2.055725358s 2.101312175s 2.107321096s 2.215384235s 2.226612114s]
Jan  1 14:21:52.645: INFO: 50 %ile: 1.590659174s
Jan  1 14:21:52.645: INFO: 90 %ile: 1.894151918s
Jan  1 14:21:52.645: INFO: 99 %ile: 2.215384235s
Jan  1 14:21:52.645: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:21:52.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6478" for this suite.
Jan  1 14:22:32.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:22:32.805: INFO: namespace svc-latency-6478 deletion completed in 40.148580895s

• [SLOW TEST:71.314 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
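
The summary above sorts all 200 endpoint-creation latencies and reports the 50th/90th/99th percentiles. A minimal Go sketch of one way such a percentile can be computed over sorted durations (the helper name and the rounding rule are illustrative, not necessarily what the e2e framework uses):

package main

import (
    "fmt"
    "sort"
    "time"
)

// percentile returns the p-th percentile (0 < p <= 100) of samples using a
// rounded-rank method: sort, take the rank nearest p% of N, 0-indexed.
func percentile(samples []time.Duration, p float64) time.Duration {
    sorted := append([]time.Duration(nil), samples...)
    sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
    rank := int(p/100*float64(len(sorted))+0.5) - 1
    if rank < 0 {
        rank = 0
    }
    if rank >= len(sorted) {
        rank = len(sorted) - 1
    }
    return sorted[rank]
}

func main() {
    // Three of the logged samples, for illustration only.
    samples := []time.Duration{
        72063886 * time.Nanosecond,
        1590659174 * time.Nanosecond,
        2215384235 * time.Nanosecond,
    }
    fmt.Println(percentile(samples, 50)) // middle sample: 1.590659174s
}
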
------------------------------
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:22:32.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-c3fa6ae8-fdc9-4e12-b9c5-191819f16490
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:22:32.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2369" for this suite.
Jan  1 14:22:38.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:22:39.141: INFO: namespace configmap-2369 deletion completed in 6.202051487s

• [SLOW TEST:6.335 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
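
The test above asserts that API-server validation rejects a ConfigMap whose data map contains an empty key, so the create call itself fails and no object is left behind to clean up. A minimal client-go sketch of that negative check (assumes a recent client-go where Create takes a context; the function and object names are illustrative):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createEmptyKeyConfigMap tries to create a ConfigMap with an empty data key;
// the API server is expected to reject it with a validation error.
func createEmptyKeyConfigMap(ctx context.Context, c kubernetes.Interface, ns string) error {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
        Data:       map[string]string{"": "value"}, // empty key is invalid
    }
    _, err := c.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
    if err != nil {
        fmt.Println("create rejected as expected:", err)
    }
    return err
}
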
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:22:39.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  1 14:22:39.368: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9863,SelfLink:/api/v1/namespaces/watch-9863/configmaps/e2e-watch-test-label-changed,UID:dda0b634-9022-45db-b22b-da84b7ee82ee,ResourceVersion:18906572,Generation:0,CreationTimestamp:2020-01-01 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  1 14:22:39.368: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9863,SelfLink:/api/v1/namespaces/watch-9863/configmaps/e2e-watch-test-label-changed,UID:dda0b634-9022-45db-b22b-da84b7ee82ee,ResourceVersion:18906573,Generation:0,CreationTimestamp:2020-01-01 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  1 14:22:39.369: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9863,SelfLink:/api/v1/namespaces/watch-9863/configmaps/e2e-watch-test-label-changed,UID:dda0b634-9022-45db-b22b-da84b7ee82ee,ResourceVersion:18906574,Generation:0,CreationTimestamp:2020-01-01 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  1 14:22:49.440: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9863,SelfLink:/api/v1/namespaces/watch-9863/configmaps/e2e-watch-test-label-changed,UID:dda0b634-9022-45db-b22b-da84b7ee82ee,ResourceVersion:18906589,Generation:0,CreationTimestamp:2020-01-01 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  1 14:22:49.441: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9863,SelfLink:/api/v1/namespaces/watch-9863/configmaps/e2e-watch-test-label-changed,UID:dda0b634-9022-45db-b22b-da84b7ee82ee,ResourceVersion:18906590,Generation:0,CreationTimestamp:2020-01-01 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  1 14:22:49.441: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9863,SelfLink:/api/v1/namespaces/watch-9863/configmaps/e2e-watch-test-label-changed,UID:dda0b634-9022-45db-b22b-da84b7ee82ee,ResourceVersion:18906591,Generation:0,CreationTimestamp:2020-01-01 14:22:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:22:49.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9863" for this suite.
Jan  1 14:22:55.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:22:55.790: INFO: namespace watch-9863 deletion completed in 6.250163503s

• [SLOW TEST:16.648 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
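
The events logged above come from a watch filtered by a label selector: when the label is changed away, the object leaves the watch as a DELETED event; when the label is restored, it re-enters as ADDED. A minimal client-go sketch of opening such a watch (assumes a recent client-go where Watch takes a context):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// watchLabeledConfigMaps streams events only for ConfigMaps matching the
// selector; objects that stop matching surface as DELETED events.
func watchLabeledConfigMaps(ctx context.Context, c kubernetes.Interface, ns string) error {
    w, err := c.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
        LabelSelector: "watch-this-configmap=label-changed-and-restored",
    })
    if err != nil {
        return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Println("Got :", ev.Type)
    }
    return nil
}
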
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:22:55.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f0cf685b-faa6-47ac-9f2e-f7a97cd1c125
STEP: Creating a pod to test consume configMaps
Jan  1 14:22:55.907: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9" in namespace "configmap-7697" to be "success or failure"
Jan  1 14:22:55.934: INFO: Pod "pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.698954ms
Jan  1 14:22:57.955: INFO: Pod "pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048302542s
Jan  1 14:22:59.962: INFO: Pod "pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054913485s
Jan  1 14:23:01.971: INFO: Pod "pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064322628s
Jan  1 14:23:03.984: INFO: Pod "pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077155432s
Jan  1 14:23:05.994: INFO: Pod "pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087307334s
STEP: Saw pod success
Jan  1 14:23:05.995: INFO: Pod "pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9" satisfied condition "success or failure"
Jan  1 14:23:05.999: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9 container configmap-volume-test: 
STEP: delete the pod
Jan  1 14:23:06.083: INFO: Waiting for pod pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9 to disappear
Jan  1 14:23:06.093: INFO: Pod pod-configmaps-1d61e068-48d7-4564-8cf4-27e4e9501cd9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:23:06.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7697" for this suite.
Jan  1 14:23:12.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:23:12.320: INFO: namespace configmap-7697 deletion completed in 6.212529946s

• [SLOW TEST:16.529 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
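
The pod in the test above mounts a ConfigMap volume whose defaultMode sets the permission bits on the projected files, then exits so the log can be checked ("success or failure" means the container terminated with exit code 0). A minimal Go sketch of such a pod spec (image, paths, and names are illustrative, not the test's actual fixtures):

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// configMapModePod mounts cmName with defaultMode 0400 and prints the file
// listing plus contents, so both permissions and data land in the pod log.
func configMapModePod(cmName string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-mode"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                        DefaultMode:          int32Ptr(0400),
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/*"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/configmap-volume",
                }},
            }},
        },
    }
}
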
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:23:12.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  1 14:23:12.425: INFO: Waiting up to 5m0s for pod "pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38" in namespace "emptydir-4176" to be "success or failure"
Jan  1 14:23:12.430: INFO: Pod "pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38": Phase="Pending", Reason="", readiness=false. Elapsed: 5.249571ms
Jan  1 14:23:14.444: INFO: Pod "pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019057699s
Jan  1 14:23:16.450: INFO: Pod "pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025406885s
Jan  1 14:23:18.460: INFO: Pod "pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03504274s
Jan  1 14:23:20.470: INFO: Pod "pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044747916s
STEP: Saw pod success
Jan  1 14:23:20.470: INFO: Pod "pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38" satisfied condition "success or failure"
Jan  1 14:23:20.478: INFO: Trying to get logs from node iruya-node pod pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38 container test-container: 
STEP: delete the pod
Jan  1 14:23:20.605: INFO: Waiting for pod pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38 to disappear
Jan  1 14:23:20.614: INFO: Pod pod-757a29ca-8e53-4a5d-af1a-bcc0424e4d38 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:23:20.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4176" for this suite.
Jan  1 14:23:26.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:23:26.785: INFO: namespace emptydir-4176 deletion completed in 6.164789361s

• [SLOW TEST:14.465 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
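
"(root,0777,tmpfs)" in the title above encodes the variant being exercised: running as root, file mode 0777, with the emptyDir backed by memory (tmpfs) rather than node disk. The memory medium is selected on the volume source; a minimal Go sketch of that part of the spec (names and the probe command are illustrative):

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod mounts a memory-backed emptyDir and writes a 0777 file
// into it so the mount's behavior can be verified from the container log.
func tmpfsEmptyDirPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{
                        Medium: corev1.StorageMediumMemory, // tmpfs, not node disk
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                }},
            }},
        },
    }
}
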
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:23:26.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6a621791-1564-4607-904d-5af740cba30d
STEP: Creating a pod to test consume secrets
Jan  1 14:23:26.885: INFO: Waiting up to 5m0s for pod "pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264" in namespace "secrets-7033" to be "success or failure"
Jan  1 14:23:26.947: INFO: Pod "pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264": Phase="Pending", Reason="", readiness=false. Elapsed: 62.25241ms
Jan  1 14:23:29.069: INFO: Pod "pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1843135s
Jan  1 14:23:31.076: INFO: Pod "pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190581848s
Jan  1 14:23:33.081: INFO: Pod "pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196173732s
Jan  1 14:23:35.092: INFO: Pod "pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.207100634s
STEP: Saw pod success
Jan  1 14:23:35.093: INFO: Pod "pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264" satisfied condition "success or failure"
Jan  1 14:23:35.102: INFO: Trying to get logs from node iruya-node pod pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264 container secret-volume-test: 
STEP: delete the pod
Jan  1 14:23:35.176: INFO: Waiting for pod pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264 to disappear
Jan  1 14:23:35.184: INFO: Pod pod-secrets-667a3045-b5ea-410f-b872-ebd8858c6264 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:23:35.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7033" for this suite.
Jan  1 14:23:41.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:23:41.516: INFO: namespace secrets-7033 deletion completed in 6.323928915s

• [SLOW TEST:14.730 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
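
"Multiple volumes" here means the same Secret is mounted at two different paths in one pod; the kubelet projects it independently into each mount. A minimal Go sketch of the relevant volume wiring (mount paths and names are illustrative):

package main

import (
    corev1 "k8s.io/api/core/v1"
)

// twoSecretVolumes wires one Secret into a pod twice, at two mount points.
func twoSecretVolumes(secretName string) ([]corev1.Volume, []corev1.VolumeMount) {
    volumes := []corev1.Volume{
        {
            Name: "secret-volume-1",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: secretName},
            },
        },
        {
            Name: "secret-volume-2",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: secretName},
            },
        },
    }
    mounts := []corev1.VolumeMount{
        {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1"},
        {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2"},
    }
    return volumes, mounts
}
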
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:23:41.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  1 14:23:52.212: INFO: Successfully updated pod "annotationupdate9adc3c3f-134a-4d40-acbb-96c3b7b1600c"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:23:54.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2632" for this suite.
Jan  1 14:24:16.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:24:16.469: INFO: namespace projected-2632 deletion completed in 22.184296873s

• [SLOW TEST:34.953 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
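
The "Successfully updated pod" line above is the interesting step: the pod exposes its own metadata.annotations through a projected downward API volume, the test mutates the annotations on the live object, and the kubelet is expected to refresh the projected file without restarting the pod. A minimal client-go sketch of the annotation-update half (assumes a recent client-go where Get/Update take a context; the annotation key is illustrative):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// bumpAnnotation changes an annotation on a running pod; a downward API
// volume projecting metadata.annotations is then refreshed by the kubelet.
func bumpAnnotation(ctx context.Context, c kubernetes.Interface, ns, name string) error {
    pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    if pod.Annotations == nil {
        pod.Annotations = map[string]string{}
    }
    pod.Annotations["builder"] = "bar"
    _, err = c.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
    return err
}
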
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:24:16.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan  1 14:24:24.736: INFO: Pod pod-hostip-20eef5b3-0f4a-46a8-a70f-9df2b03db3cc has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:24:24.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3674" for this suite.
Jan  1 14:24:54.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:24:54.954: INFO: namespace pods-3674 deletion completed in 30.205736899s

• [SLOW TEST:38.483 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
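
The assertion behind the "has hostIP" line above is simply that status.hostIP is populated once the pod is scheduled and running. A minimal client-go sketch (assumes a recent client-go where Get takes a context):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// printHostIP reads the pod's status and prints the IP of the node it
// landed on; an empty string means the pod has not been scheduled yet.
func printHostIP(ctx context.Context, c kubernetes.Interface, ns, name string) error {
    pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    fmt.Printf("Pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP)
    return nil
}
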
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:24:54.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-221
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  1 14:24:55.163: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  1 14:25:27.384: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-221 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:25:27.385: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:25:28.757: INFO: Found all expected endpoints: [netserver-0]
Jan  1 14:25:28.770: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-221 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:25:28.770: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:25:30.172: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:25:30.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-221" for this suite.
Jan  1 14:25:56.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:25:56.341: INFO: namespace pod-network-test-221 deletion completed in 26.158520391s

• [SLOW TEST:61.386 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
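
The ExecWithOptions lines above show the actual probe: from a host-networked helper pod, `echo hostName | nc -w 1 -u <pod IP> 8081` sends the literal string "hostName" to each netserver pod over UDP and expects the pod's hostname back. A minimal Go sketch of the same round trip (address and timeout are illustrative):

package main

import (
    "fmt"
    "net"
    "time"
)

// probeUDP sends "hostName" to a netserver-style endpoint over UDP and
// returns whatever comes back within the deadline, mirroring the nc check.
func probeUDP(addr string) (string, error) {
    conn, err := net.DialTimeout("udp", addr, time.Second)
    if err != nil {
        return "", err
    }
    defer conn.Close()
    if err := conn.SetDeadline(time.Now().Add(time.Second)); err != nil {
        return "", err
    }
    if _, err := conn.Write([]byte("hostName")); err != nil {
        return "", err
    }
    buf := make([]byte, 1024)
    n, err := conn.Read(buf)
    if err != nil {
        return "", err
    }
    return string(buf[:n]), nil
}

func main() {
    reply, err := probeUDP("10.44.0.1:8081")
    fmt.Println(reply, err)
}
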
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:25:56.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e69656fa-ec75-4ca8-afb4-a9d2ef1b76df
STEP: Creating a pod to test consume configMaps
Jan  1 14:25:56.491: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04" in namespace "projected-9873" to be "success or failure"
Jan  1 14:25:56.500: INFO: Pod "pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589572ms
Jan  1 14:25:58.521: INFO: Pod "pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029241146s
Jan  1 14:26:00.542: INFO: Pod "pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050664849s
Jan  1 14:26:02.574: INFO: Pod "pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082585514s
Jan  1 14:26:04.603: INFO: Pod "pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111497672s
STEP: Saw pod success
Jan  1 14:26:04.604: INFO: Pod "pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04" satisfied condition "success or failure"
Jan  1 14:26:04.643: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 14:26:04.730: INFO: Waiting for pod pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04 to disappear
Jan  1 14:26:04.738: INFO: Pod pod-projected-configmaps-48cf6a2d-34c1-484e-86d5-4002ddeaff04 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:26:04.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9873" for this suite.
Jan  1 14:26:10.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:26:10.916: INFO: namespace projected-9873 deletion completed in 6.168876672s

• [SLOW TEST:14.574 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
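
A projected volume differs from a plain configMap volume in that several sources (configMaps, secrets, downward API, service account tokens) can share one mount point; here a single configMap source is projected. A minimal Go sketch of that volume source (names are illustrative):

package main

import (
    corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume exposes one ConfigMap through a projected volume;
// additional VolumeProjection entries could share the same mount.
func projectedConfigMapVolume(cmName string) corev1.Volume {
    return corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                    },
                }},
            },
        },
    }
}
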
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:26:10.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan  1 14:26:11.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  1 14:26:13.653: INFO: stderr: ""
Jan  1 14:26:13.653: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:26:13.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5360" for this suite.
Jan  1 14:26:19.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:26:19.869: INFO: namespace kubectl-5360 deletion completed in 6.20398383s

• [SLOW TEST:8.952 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:26:19.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan  1 14:26:28.563: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7780 pod-service-account-ec49109c-3781-4eb5-ab5a-6d51f72fe274 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan  1 14:26:29.267: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7780 pod-service-account-ec49109c-3781-4eb5-ab5a-6d51f72fe274 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan  1 14:26:29.728: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7780 pod-service-account-ec49109c-3781-4eb5-ab5a-6d51f72fe274 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:26:30.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7780" for this suite.
Jan  1 14:26:36.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:26:36.415: INFO: namespace svcaccounts-7780 deletion completed in 6.217137642s

• [SLOW TEST:16.545 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
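
The three kubectl exec invocations above read the credentials that the kubelet auto-mounts for the pod's service account; the fixed mount point is part of the contract the test verifies. A minimal Go sketch of the in-container side of that check, reading the same three files the test cats:

package main

import (
    "fmt"
    "os"
)

// Inside a pod with a mounted service account, report the size of each
// auto-provisioned credential file.
func main() {
    base := "/var/run/secrets/kubernetes.io/serviceaccount"
    for _, f := range []string{"token", "ca.crt", "namespace"} {
        data, err := os.ReadFile(base + "/" + f)
        if err != nil {
            fmt.Println(f, "error:", err)
            continue
        }
        fmt.Printf("%s: %d bytes\n", f, len(data))
    }
}
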
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:26:36.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  1 14:26:36.513: INFO: Waiting up to 5m0s for pod "downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385" in namespace "downward-api-4960" to be "success or failure"
Jan  1 14:26:36.522: INFO: Pod "downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385": Phase="Pending", Reason="", readiness=false. Elapsed: 8.962997ms
Jan  1 14:26:38.532: INFO: Pod "downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019363441s
Jan  1 14:26:40.546: INFO: Pod "downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032407286s
Jan  1 14:26:42.940: INFO: Pod "downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427047055s
Jan  1 14:26:44.954: INFO: Pod "downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385": Phase="Pending", Reason="", readiness=false. Elapsed: 8.440909645s
Jan  1 14:26:46.963: INFO: Pod "downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385": Phase="Pending", Reason="", readiness=false. Elapsed: 10.449431302s
Jan  1 14:26:48.971: INFO: Pod "downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.458144816s
STEP: Saw pod success
Jan  1 14:26:48.971: INFO: Pod "downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385" satisfied condition "success or failure"
Jan  1 14:26:48.977: INFO: Trying to get logs from node iruya-node pod downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385 container dapi-container: 
STEP: delete the pod
Jan  1 14:26:49.069: INFO: Waiting for pod downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385 to disappear
Jan  1 14:26:49.088: INFO: Pod downward-api-08882d32-3ddb-4e7a-abe8-cb18321e0385 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:26:49.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4960" for this suite.
Jan  1 14:26:55.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:26:55.371: INFO: namespace downward-api-4960 deletion completed in 6.184404364s

• [SLOW TEST:18.955 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
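
This downward API variant injects pod metadata as environment variables rather than files, using fieldRef selectors on the container's env. A minimal Go sketch of the three selectors this test covers (env var names are illustrative; the sibling test further down exercises metadata.uid the same way):

package main

import (
    corev1 "k8s.io/api/core/v1"
)

// downwardAPIEnv exposes the pod's name, namespace, and IP as env vars.
func downwardAPIEnv() []corev1.EnvVar {
    fieldEnv := func(name, path string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: name,
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
            },
        }
    }
    return []corev1.EnvVar{
        fieldEnv("POD_NAME", "metadata.name"),
        fieldEnv("POD_NAMESPACE", "metadata.namespace"),
        fieldEnv("POD_IP", "status.podIP"),
    }
}
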
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:26:55.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e1400b67-9c20-497a-9d77-832b15e9bf17
STEP: Creating a pod to test consume configMaps
Jan  1 14:26:55.567: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f" in namespace "projected-1271" to be "success or failure"
Jan  1 14:26:55.573: INFO: Pod "pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.00845ms
Jan  1 14:26:57.628: INFO: Pod "pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060695176s
Jan  1 14:26:59.644: INFO: Pod "pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076357774s
Jan  1 14:27:01.723: INFO: Pod "pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155668895s
Jan  1 14:27:03.741: INFO: Pod "pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173742372s
Jan  1 14:27:05.788: INFO: Pod "pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.220289785s
STEP: Saw pod success
Jan  1 14:27:05.788: INFO: Pod "pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f" satisfied condition "success or failure"
Jan  1 14:27:05.812: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f container projected-configmap-volume-test: 
STEP: delete the pod
Jan  1 14:27:06.079: INFO: Waiting for pod pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f to disappear
Jan  1 14:27:06.099: INFO: Pod pod-projected-configmaps-dc45109a-828b-4b27-a5e8-fde96edb3a3f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:27:06.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1271" for this suite.
Jan  1 14:27:12.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:27:12.345: INFO: namespace projected-1271 deletion completed in 6.234783482s

• [SLOW TEST:16.972 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
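
"With mappings as non-root" adds two twists to the plain projected case: individual keys are remapped to chosen file paths via items, and the container runs with a non-root UID so file ownership and readability are exercised as well. A minimal Go sketch of those two knobs (key, path, and UID are illustrative):

package main

import (
    corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// mappedProjection remaps one ConfigMap key to a custom path; pair it with
// a non-root SecurityContext on the pod to reproduce this test's variant.
func mappedProjection(cmName string) (corev1.VolumeProjection, *corev1.PodSecurityContext) {
    projection := corev1.VolumeProjection{
        ConfigMap: &corev1.ConfigMapProjection{
            LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
            Items: []corev1.KeyToPath{{
                Key:  "data-1",
                Path: "path/to/data-2",
            }},
        },
    }
    securityContext := &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)}
    return projection, securityContext
}
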
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:27:12.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  1 14:27:12.499: INFO: Waiting up to 5m0s for pod "downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927" in namespace "downward-api-5892" to be "success or failure"
Jan  1 14:27:12.572: INFO: Pod "downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927": Phase="Pending", Reason="", readiness=false. Elapsed: 71.794225ms
Jan  1 14:27:14.593: INFO: Pod "downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093290724s
Jan  1 14:27:16.609: INFO: Pod "downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109339427s
Jan  1 14:27:18.626: INFO: Pod "downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126520763s
Jan  1 14:27:20.637: INFO: Pod "downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137001396s
Jan  1 14:27:22.650: INFO: Pod "downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150563964s
STEP: Saw pod success
Jan  1 14:27:22.651: INFO: Pod "downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927" satisfied condition "success or failure"
Jan  1 14:27:22.655: INFO: Trying to get logs from node iruya-node pod downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927 container dapi-container: 
STEP: delete the pod
Jan  1 14:27:22.801: INFO: Waiting for pod downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927 to disappear
Jan  1 14:27:22.807: INFO: Pod downward-api-2fe6d4ce-5f9c-4c32-8eed-d2e4ebcea927 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:27:22.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5892" for this suite.
Jan  1 14:27:28.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:27:29.071: INFO: namespace downward-api-5892 deletion completed in 6.25913404s

• [SLOW TEST:16.726 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:27:29.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:27:37.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8194" for this suite.
Jan  1 14:28:19.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:28:19.413: INFO: namespace kubelet-test-8194 deletion completed in 42.190350855s

• [SLOW TEST:50.341 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
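
The hostAliases test produces no STEP output between setup and teardown because the whole check happens inside the pod: entries from spec.hostAliases must be written by the kubelet into the container's /etc/hosts. A minimal Go sketch of such a spec fragment (IP and hostnames are illustrative):

package main

import (
    corev1 "k8s.io/api/core/v1"
)

// withHostAliases returns a PodSpec whose alias entries the kubelet
// appends to /etc/hosts in every container of the pod.
func withHostAliases(spec corev1.PodSpec) corev1.PodSpec {
    spec.HostAliases = []corev1.HostAlias{{
        IP:        "123.45.67.89",
        Hostnames: []string{"foo.remote", "bar.remote"},
    }}
    return spec
}
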
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:28:19.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  1 14:28:28.331: INFO: Successfully updated pod "pod-update-152fcd56-b4d5-4e9d-911a-40fceb150c5a"
STEP: verifying the updated pod is in kubernetes
Jan  1 14:28:28.390: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:28:28.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5614" for this suite.
Jan  1 14:28:50.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:28:50.565: INFO: namespace pods-5614 deletion completed in 22.16595136s

• [SLOW TEST:31.151 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
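
"Updating the pod" above is a read-modify-write against the live object; on a busy cluster such writes can hit resourceVersion conflicts, so the idiomatic form wraps the attempt in a conflict retry. A minimal client-go sketch (assumes a recent client-go; the label key and value are illustrative):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/util/retry"
)

// updatePodLabel re-reads the pod on each attempt so a conflict retry
// always works against the latest resourceVersion.
func updatePodLabel(ctx context.Context, c kubernetes.Interface, ns, name string) error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if pod.Labels == nil {
            pod.Labels = map[string]string{}
        }
        pod.Labels["time"] = "updated"
        _, err = c.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
        return err
    })
}
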
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:28:50.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-5b2a0c2b-0711-4f3d-8575-21a91a1a6eaa
STEP: Creating a pod to test consume configMaps
Jan  1 14:28:50.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d" in namespace "configmap-3276" to be "success or failure"
Jan  1 14:28:50.668: INFO: Pod "pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.683969ms
Jan  1 14:28:52.686: INFO: Pod "pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022095957s
Jan  1 14:28:54.694: INFO: Pod "pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030330041s
Jan  1 14:28:56.702: INFO: Pod "pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037910412s
Jan  1 14:28:58.714: INFO: Pod "pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049733803s
Jan  1 14:29:00.729: INFO: Pod "pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064825465s
STEP: Saw pod success
Jan  1 14:29:00.729: INFO: Pod "pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d" satisfied condition "success or failure"
Jan  1 14:29:00.736: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d container configmap-volume-test: 
STEP: delete the pod
Jan  1 14:29:00.830: INFO: Waiting for pod pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d to disappear
Jan  1 14:29:00.845: INFO: Pod pod-configmaps-f8d4b67b-be4b-41cb-89f7-4e8f4e403a1d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:29:00.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3276" for this suite.
Jan  1 14:29:06.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:29:07.121: INFO: namespace configmap-3276 deletion completed in 6.232712435s

• [SLOW TEST:16.553 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
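(A minimal sketch of the pod shape this test exercises: one configMap mounted through two volumes in the same pod. The names and the busybox image are illustrative; the suite uses generated names and its own test image.)

kubectl create configmap cm-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-two-volumes-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # read the same key through both mount points
    command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/cm-volume-1
    - name: cm-volume-2
      mountPath: /etc/cm-volume-2
  volumes:
  - name: cm-volume-1
    configMap:
      name: cm-demo
  - name: cm-volume-2
    configMap:
      name: cm-demo
EOF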
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:29:07.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 14:29:07.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8934'
Jan  1 14:29:07.274: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 14:29:07.274: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan  1 14:29:09.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8934'
Jan  1 14:29:09.505: INFO: stderr: ""
Jan  1 14:29:09.505: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:29:09.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8934" for this suite.
Jan  1 14:29:15.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:29:15.728: INFO: namespace kubectl-8934 deletion completed in 6.21064565s

• [SLOW TEST:8.606 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
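(The command above, together with the replacements its deprecation warning names. Resource names are illustrative; under the v1.15 defaults seen here, a bare kubectl run expands to --generator=deployment/apps.v1 and therefore creates a Deployment.)

# what the test ran:
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# the two replacements suggested by the warning:
kubectl run nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine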
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:29:15.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-6f274e90-68a9-4fc2-b16a-e3421497d103 in namespace container-probe-7547
Jan  1 14:29:23.929: INFO: Started pod busybox-6f274e90-68a9-4fc2-b16a-e3421497d103 in namespace container-probe-7547
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 14:29:23.946: INFO: Initial restart count of pod busybox-6f274e90-68a9-4fc2-b16a-e3421497d103 is 0
Jan  1 14:30:22.381: INFO: Restart count of pod container-probe-7547/busybox-6f274e90-68a9-4fc2-b16a-e3421497d103 is now 1 (58.435113435s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:30:22.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7547" for this suite.
Jan  1 14:30:28.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:30:28.700: INFO: namespace container-probe-7547 deletion completed in 6.160963782s

• [SLOW TEST:72.972 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
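(A minimal sketch of a pod that trips an exec liveness probe the way this test observes: the container creates /tmp/health, then removes it, so "cat /tmp/health" starts failing and the kubelet restarts the container. Names and timings are illustrative, not the suite's.)

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox
    # create the probe target, wait, then delete it so the probe begins to fail
    command: ["sh", "-c", "echo ok > /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
EOF
# once the probe fails, restartCount increments, matching "Restart count ... is now 1"
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'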
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:30:28.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b4ffa02a-9f50-4016-9cdf-b20ec2caf64c
STEP: Creating a pod to test consume secrets
Jan  1 14:30:28.811: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7" in namespace "projected-4891" to be "success or failure"
Jan  1 14:30:28.819: INFO: Pod "pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331416ms
Jan  1 14:30:30.833: INFO: Pod "pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022606211s
Jan  1 14:30:32.865: INFO: Pod "pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05412821s
Jan  1 14:30:34.876: INFO: Pod "pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065735719s
Jan  1 14:30:36.890: INFO: Pod "pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079153549s
STEP: Saw pod success
Jan  1 14:30:36.890: INFO: Pod "pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7" satisfied condition "success or failure"
Jan  1 14:30:36.898: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7 container projected-secret-volume-test: 
STEP: delete the pod
Jan  1 14:30:36.976: INFO: Waiting for pod pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7 to disappear
Jan  1 14:30:36.985: INFO: Pod pod-projected-secrets-a31a93aa-a125-4124-b1f7-e0f38e0578b7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:30:36.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4891" for this suite.
Jan  1 14:30:43.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:30:43.142: INFO: namespace projected-4891 deletion completed in 6.148456045s

• [SLOW TEST:14.442 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
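(A minimal sketch of the projected-secret pod shape above: a projected volume with a defaultMode, consumed by a non-root container under an fsGroup. The names, UIDs, and mode below are illustrative.)

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000   # non-root
    fsGroup: 1001
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/projected-secret && cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: projected-secret-demo
EOF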
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:30:43.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 14:30:43.242: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  1 14:30:48.252: INFO: Pod name rollover-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Jan  1 14:30:52.270: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  1 14:30:54.279: INFO: Creating deployment "test-rollover-deployment"
Jan  1 14:30:54.299: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  1 14:30:56.313: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  1 14:30:56.323: INFO: Ensure that both replica sets have 1 created replica
Jan  1 14:30:56.330: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  1 14:30:56.345: INFO: Updating deployment test-rollover-deployment
Jan  1 14:30:56.345: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  1 14:30:58.374: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  1 14:30:58.396: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  1 14:30:58.409: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 14:30:58.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485856, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:00.426: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 14:31:00.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485856, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:02.425: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 14:31:02.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485856, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:04.423: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 14:31:04.424: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485856, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:06.429: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 14:31:06.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485866, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:08.429: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 14:31:08.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485866, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:10.429: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 14:31:10.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485866, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:12.429: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 14:31:12.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485866, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:14.422: INFO: all replica sets need to contain the pod-template-hash label
Jan  1 14:31:14.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485866, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:16.516: INFO: 
Jan  1 14:31:16.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713485854, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:31:18.425: INFO: 
Jan  1 14:31:18.425: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  1 14:31:18.441: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-2982,SelfLink:/apis/apps/v1/namespaces/deployment-2982/deployments/test-rollover-deployment,UID:417610b7-44f7-4158-a204-87889ef540a9,ResourceVersion:18907815,Generation:2,CreationTimestamp:2020-01-01 14:30:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-01 14:30:54 +0000 UTC 2020-01-01 14:30:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-01 14:31:16 +0000 UTC 2020-01-01 14:30:54 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  1 14:31:18.452: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-2982,SelfLink:/apis/apps/v1/namespaces/deployment-2982/replicasets/test-rollover-deployment-854595fc44,UID:62d9c000-ed8e-4dc3-8caa-ccf8c6dfd71b,ResourceVersion:18907805,Generation:2,CreationTimestamp:2020-01-01 14:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 417610b7-44f7-4158-a204-87889ef540a9 0xc002bf13d7 0xc002bf13d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  1 14:31:18.452: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  1 14:31:18.453: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-2982,SelfLink:/apis/apps/v1/namespaces/deployment-2982/replicasets/test-rollover-controller,UID:0b02f9f9-29d4-496e-969f-a62fc36b21ec,ResourceVersion:18907814,Generation:2,CreationTimestamp:2020-01-01 14:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 417610b7-44f7-4158-a204-87889ef540a9 0xc002bf12d7 0xc002bf12d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 14:31:18.453: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-2982,SelfLink:/apis/apps/v1/namespaces/deployment-2982/replicasets/test-rollover-deployment-9b8b997cf,UID:70705735-b084-4e5b-8dd8-ab86be793146,ResourceVersion:18907763,Generation:2,CreationTimestamp:2020-01-01 14:30:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 417610b7-44f7-4158-a204-87889ef540a9 0xc002bf14a0 0xc002bf14a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 14:31:18.462: INFO: Pod "test-rollover-deployment-854595fc44-xbtkk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-xbtkk,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-2982,SelfLink:/api/v1/namespaces/deployment-2982/pods/test-rollover-deployment-854595fc44-xbtkk,UID:c7c1e76f-18fa-4701-8e36-8a5fa3c58162,ResourceVersion:18907789,Generation:0,CreationTimestamp:2020-01-01 14:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 62d9c000-ed8e-4dc3-8caa-ccf8c6dfd71b 0xc002856237 0xc002856238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hnbm2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hnbm2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-hnbm2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028562b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028562d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:30:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:30:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-01 14:30:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-01 14:31:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d93ad7a523fb6631e9d654bc461da773f7357bdf24ebb1958e22b8d4a2658e97}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:31:18.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2982" for this suite.
Jan  1 14:31:26.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:31:26.634: INFO: namespace deployment-2982 deletion completed in 8.159997333s

• [SLOW TEST:43.491 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
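(The rollover above, reproduced with stock kubectl: a deployment pinned to maxSurge=1, maxUnavailable=0, minReadySeconds=10 is switched to the redis test image, after which the old ReplicaSets end up with zero replicas. Names are illustrative; '*' updates every container regardless of its generated name.)

kubectl create deployment rollover-demo --image=docker.io/library/nginx:1.14-alpine
# constrain the rollout the way the test does
kubectl patch deployment rollover-demo -p '{"spec":{"minReadySeconds":10,"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
kubectl set image deployment/rollover-demo '*'=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status deployment/rollover-demo
kubectl get rs -l app=rollover-demo   # old ReplicaSet at 0 replicas, cf. "Ensure that both old replica sets have no replicas"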
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:31:26.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  1 14:31:35.991: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:31:36.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7119" for this suite.
Jan  1 14:31:42.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:31:42.656: INFO: namespace container-runtime-7119 deletion completed in 6.569805908s

• [SLOW TEST:16.021 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
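(A minimal sketch of the FallbackToLogsOnError behavior checked above: the container writes only to its log and exits nonzero, nothing is written to /dev/termination-log, so the kubelet fills the termination message from the log tail. Names are illustrative; "DONE" matches the message the test expects.)

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # log "DONE" and fail without writing a termination-log file
    command: ["sh", "-c", "echo -n DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'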
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:31:42.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  1 14:31:42.750: INFO: Waiting up to 5m0s for pod "pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2" in namespace "emptydir-9347" to be "success or failure"
Jan  1 14:31:42.809: INFO: Pod "pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 58.961767ms
Jan  1 14:31:44.824: INFO: Pod "pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074143665s
Jan  1 14:31:46.831: INFO: Pod "pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081196871s
Jan  1 14:31:48.843: INFO: Pod "pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092896505s
Jan  1 14:31:50.881: INFO: Pod "pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.130348854s
STEP: Saw pod success
Jan  1 14:31:50.881: INFO: Pod "pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2" satisfied condition "success or failure"
Jan  1 14:31:50.905: INFO: Trying to get logs from node iruya-node pod pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2 container test-container: 
STEP: delete the pod
Jan  1 14:31:51.072: INFO: Waiting for pod pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2 to disappear
Jan  1 14:31:51.081: INFO: Pod pod-f5eb91aa-6068-47b7-a783-d8c753f3d8f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:31:51.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9347" for this suite.
Jan  1 14:31:57.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:31:57.263: INFO: namespace emptydir-9347 deletion completed in 6.17452464s

• [SLOW TEST:14.607 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
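(A minimal sketch of the (non-root,0666,tmpfs) case above: a memory-backed emptyDir written by a non-root user, with the file mode set to 0666. The busybox image and names are illustrative; the suite uses its own mounttest image to check mode, owner, and filesystem type.)

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000   # non-root
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -ln /mnt/volume/f && grep /mnt/volume /proc/mounts"]
    volumeMounts:
    - name: vol
      mountPath: /mnt/volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory   # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo   # expect -rw-rw-rw-, uid 1000, fstype tmpfs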
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:31:57.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3787
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3787
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3787
Jan  1 14:31:57.417: INFO: Found 0 stateful pods, waiting for 1
Jan  1 14:32:07.427: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with an unhealthy stateful pod
Jan  1 14:32:07.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 14:32:08.155: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 14:32:08.155: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 14:32:08.155: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 14:32:08.169: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  1 14:32:18.179: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 14:32:18.179: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 14:32:18.215: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  1 14:32:18.215: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:57 +0000 UTC  }]
Jan  1 14:32:18.215: INFO: 
Jan  1 14:32:18.215: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  1 14:32:19.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.981530257s
Jan  1 14:32:20.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970507192s
Jan  1 14:32:21.486: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.85685816s
Jan  1 14:32:22.516: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.709815859s
Jan  1 14:32:23.527: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.680191872s
Jan  1 14:32:24.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.669397875s
Jan  1 14:32:25.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.521509652s
Jan  1 14:32:27.208: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.26486887s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3787
Jan  1 14:32:28.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 14:32:28.936: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jan  1 14:32:28.936: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 14:32:28.936: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 14:32:28.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 14:32:29.426: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan  1 14:32:29.426: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 14:32:29.426: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 14:32:29.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 14:32:30.009: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan  1 14:32:30.009: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 14:32:30.009: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 14:32:30.016: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:32:30.016: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:32:30.016: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with an unhealthy stateful pod
Jan  1 14:32:30.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 14:32:30.662: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 14:32:30.662: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 14:32:30.662: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 14:32:30.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 14:32:31.070: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 14:32:31.070: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 14:32:31.070: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 14:32:31.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 14:32:31.493: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 14:32:31.494: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 14:32:31.494: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 14:32:31.494: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 14:32:31.499: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  1 14:32:41.516: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 14:32:41.516: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 14:32:41.516: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 14:32:41.553: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  1 14:32:41.554: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:57 +0000 UTC  }]
Jan  1 14:32:41.554: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:41.554: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:41.554: INFO: 
Jan  1 14:32:41.555: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  1 14:32:43.352: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  1 14:32:43.352: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:57 +0000 UTC  }]
Jan  1 14:32:43.352: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:43.353: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:43.353: INFO: 
Jan  1 14:32:43.353: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  1 14:32:44 - 14:32:47: INFO: (four further polls at ~1s intervals repeated the table above unchanged: ss-0, ss-1 and ss-2 Running with Ready=False and ContainersNotReady [nginx], each ending "StatefulSet ss has not reached scale 0, at 3")
Jan  1 14:32:48.853: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  1 14:32:48.853: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:31:57 +0000 UTC  }]
Jan  1 14:32:48.854: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:48.854: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:48.854: INFO: 
Jan  1 14:32:48.854: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  1 14:32:49.868: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  1 14:32:49.868: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:49.868: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:49.868: INFO: 
Jan  1 14:32:49.868: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  1 14:32:50.879: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  1 14:32:50.880: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:50.880: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:32:18 +0000 UTC  }]
Jan  1 14:32:50.880: INFO: 
Jan  1 14:32:50.880: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3787
Jan  1 14:32:51.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 14:32:52.180: INFO: rc: 1
Jan  1 14:32:52.181: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001897c80 exit status 1   true [0xc000378e28 0xc000378ea8 0xc000378f08] [0xc000378e28 0xc000378ea8 0xc000378f08] [0xc000378e78 0xc000378ef0] [0xba6c50 0xba6c50] 0xc002d21b00 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
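The mv of index.html back into nginx's web root is the suite's readiness toggle: the pods' HTTP probe fetches that page, so moving the file to /tmp marks a pod unhealthy and moving it back restores it. Once the scale-down has deleted ss-1 the exec can only fail, and the framework tolerates that by retrying until its window expires. A hand-run equivalent of the toggle, against a pod that still exists:

  kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-0 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'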
Jan  1 14:33:02.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 14:33:02.360: INFO: rc: 1
Jan  1 14:33:02.361: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001897d70 exit status 1   true [0xc000378f18 0xc000378f88 0xc000378fe0] [0xc000378f18 0xc000378f88 0xc000378fe0] [0xc000378f68 0xc000378fc0] [0xba6c50 0xba6c50] 0xc002d21f20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan  1 14:33:12 - 14:37:49: INFO: (RunHostCmd kept retrying every 10s; all further attempts were identical to the one above, exiting rc: 1 with stderr: Error from server (NotFound): pods "ss-1" not found)
Jan  1 14:37:59.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3787 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 14:37:59.347: INFO: rc: 1
Jan  1 14:37:59.348: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Jan  1 14:37:59.348: INFO: Scaling statefulset ss to 0
Jan  1 14:37:59.363: INFO: Waiting for statefulset status.replicas updated to 0
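For reference, the scale-down the framework performs here maps onto two plain kubectl operations; a minimal sketch using the names from this test (the jsonpath check is an illustrative stand-in for the framework's status polling):

  # Drop the StatefulSet to zero replicas
  kubectl --kubeconfig=/root/.kube/config scale statefulset ss --replicas=0 --namespace=statefulset-3787
  # Poll until status.replicas reports 0
  kubectl --kubeconfig=/root/.kube/config get statefulset ss --namespace=statefulset-3787 -o jsonpath='{.status.replicas}'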
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  1 14:37:59.366: INFO: Deleting all statefulset in ns statefulset-3787
Jan  1 14:37:59.369: INFO: Scaling statefulset ss to 0
Jan  1 14:37:59.376: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 14:37:59.378: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:37:59.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3787" for this suite.
Jan  1 14:38:05.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:38:05.706: INFO: namespace statefulset-3787 deletion completed in 6.260167688s

• [SLOW TEST:368.443 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:38:05.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  1 14:38:14.499: INFO: Successfully updated pod "labelsupdate6eaeebd5-d16c-47dd-9a27-b9bb5f082d1e"
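The pod in this test mounts a downwardAPI volume projecting metadata.labels into a file; the test then edits the labels and waits for the kubelet to rewrite the file. A minimal sketch of the same mutation, with the pod name taken from the log (the label key and mount path are illustrative):

  # Change a label on the live pod; the projected file should follow shortly
  kubectl label pod labelsupdate6eaeebd5-d16c-47dd-9a27-b9bb5f082d1e --namespace=downward-api-6701 testlabel=updated --overwrite
  # Read the projected file back to confirm the update propagated
  kubectl exec labelsupdate6eaeebd5-d16c-47dd-9a27-b9bb5f082d1e --namespace=downward-api-6701 -- cat /etc/podinfo/labels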
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:38:16.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6701" for this suite.
Jan  1 14:38:38.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:38:38.754: INFO: namespace downward-api-6701 deletion completed in 22.124127242s

• [SLOW TEST:33.047 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:38:38.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  1 14:38:38.844: INFO: Waiting up to 5m0s for pod "pod-6197f19d-f633-4951-a803-59c6da951714" in namespace "emptydir-3647" to be "success or failure"
Jan  1 14:38:38.907: INFO: Pod "pod-6197f19d-f633-4951-a803-59c6da951714": Phase="Pending", Reason="", readiness=false. Elapsed: 63.037367ms
Jan  1 14:38:40.920: INFO: Pod "pod-6197f19d-f633-4951-a803-59c6da951714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076072988s
Jan  1 14:38:42.934: INFO: Pod "pod-6197f19d-f633-4951-a803-59c6da951714": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089577718s
Jan  1 14:38:44.957: INFO: Pod "pod-6197f19d-f633-4951-a803-59c6da951714": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112216108s
Jan  1 14:38:46.967: INFO: Pod "pod-6197f19d-f633-4951-a803-59c6da951714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123098083s
STEP: Saw pod success
Jan  1 14:38:46.968: INFO: Pod "pod-6197f19d-f633-4951-a803-59c6da951714" satisfied condition "success or failure"
Jan  1 14:38:46.971: INFO: Trying to get logs from node iruya-node pod pod-6197f19d-f633-4951-a803-59c6da951714 container test-container: 
STEP: delete the pod
Jan  1 14:38:47.039: INFO: Waiting for pod pod-6197f19d-f633-4951-a803-59c6da951714 to disappear
Jan  1 14:38:47.110: INFO: Pod pod-6197f19d-f633-4951-a803-59c6da951714 no longer exists
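The (root,0644,tmpfs) triple means: write as root, expect file mode 0644, on an emptyDir backed by memory (medium: Memory). A self-contained pod sketch under those assumptions (the conformance test itself uses a dedicated mounttest image rather than busybox):

  kubectl apply --namespace=emptydir-3647 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      # Create a file on the tmpfs mount and show its mode
      command: ["/bin/sh", "-c", "touch /mnt/test && chmod 0644 /mnt/test && ls -l /mnt/test"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory
  EOF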
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:38:47.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3647" for this suite.
Jan  1 14:38:53.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:38:53.288: INFO: namespace emptydir-3647 deletion completed in 6.17029231s

• [SLOW TEST:14.533 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:38:53.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  1 14:39:04.068: INFO: Successfully updated pod "annotationupdate03d95242-31db-4701-beff-b5f3cb8aa4bc"
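Same mechanism as the labels test above, but with metadata.annotations projected into the downwardAPI volume; the corresponding mutation (annotation key illustrative):

  kubectl annotate pod annotationupdate03d95242-31db-4701-beff-b5f3cb8aa4bc --namespace=downward-api-6073 testannotation=updated --overwrite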
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:39:06.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6073" for this suite.
Jan  1 14:39:28.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:39:28.653: INFO: namespace downward-api-6073 deletion completed in 22.226964541s

• [SLOW TEST:35.365 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:39:28.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan  1 14:39:28.792: INFO: Waiting up to 5m0s for pod "client-containers-cf80dff1-e625-4377-af9f-ed164743915e" in namespace "containers-1840" to be "success or failure"
Jan  1 14:39:28.811: INFO: Pod "client-containers-cf80dff1-e625-4377-af9f-ed164743915e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.531976ms
Jan  1 14:39:30.836: INFO: Pod "client-containers-cf80dff1-e625-4377-af9f-ed164743915e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043701043s
Jan  1 14:39:32.847: INFO: Pod "client-containers-cf80dff1-e625-4377-af9f-ed164743915e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055014346s
Jan  1 14:39:34.861: INFO: Pod "client-containers-cf80dff1-e625-4377-af9f-ed164743915e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068870668s
Jan  1 14:39:36.890: INFO: Pod "client-containers-cf80dff1-e625-4377-af9f-ed164743915e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097667723s
Jan  1 14:39:38.899: INFO: Pod "client-containers-cf80dff1-e625-4377-af9f-ed164743915e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106668193s
STEP: Saw pod success
Jan  1 14:39:38.899: INFO: Pod "client-containers-cf80dff1-e625-4377-af9f-ed164743915e" satisfied condition "success or failure"
Jan  1 14:39:38.902: INFO: Trying to get logs from node iruya-node pod client-containers-cf80dff1-e625-4377-af9f-ed164743915e container test-container: 
STEP: delete the pod
Jan  1 14:39:38.964: INFO: Waiting for pod client-containers-cf80dff1-e625-4377-af9f-ed164743915e to disappear
Jan  1 14:39:39.021: INFO: Pod client-containers-cf80dff1-e625-4377-af9f-ed164743915e no longer exists
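Overriding the image's default command (its Docker ENTRYPOINT) is a single field in the container spec; a minimal sketch of the kind of pod this test creates:

  kubectl apply --namespace=containers-1840 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: entrypoint-override-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      # 'command' replaces the image ENTRYPOINT; 'args' would replace its CMD
      command: ["/bin/echo", "entrypoint overridden"]
  EOF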
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:39:39.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1840" for this suite.
Jan  1 14:39:45.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:39:45.173: INFO: namespace containers-1840 deletion completed in 6.142463308s

• [SLOW TEST:16.520 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:39:45.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5020
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5020 to expose endpoints map[]
Jan  1 14:39:45.343: INFO: Get endpoints failed (10.538795ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  1 14:39:46.355: INFO: successfully validated that service endpoint-test2 in namespace services-5020 exposes endpoints map[] (1.02240138s elapsed)
STEP: Creating pod pod1 in namespace services-5020
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5020 to expose endpoints map[pod1:[80]]
Jan  1 14:39:50.532: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.144029151s elapsed, will retry)
Jan  1 14:39:53.613: INFO: successfully validated that service endpoint-test2 in namespace services-5020 exposes endpoints map[pod1:[80]] (7.224441261s elapsed)
STEP: Creating pod pod2 in namespace services-5020
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5020 to expose endpoints map[pod1:[80] pod2:[80]]
Jan  1 14:39:59.182: INFO: Unexpected endpoints: found map[09e6d6e2-616f-4b8a-8a59-eabf56bd1480:[80]], expected map[pod1:[80] pod2:[80]] (5.542434298s elapsed, will retry)
Jan  1 14:40:02.307: INFO: successfully validated that service endpoint-test2 in namespace services-5020 exposes endpoints map[pod1:[80] pod2:[80]] (8.667946611s elapsed)
STEP: Deleting pod pod1 in namespace services-5020
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5020 to expose endpoints map[pod2:[80]]
Jan  1 14:40:03.351: INFO: successfully validated that service endpoint-test2 in namespace services-5020 exposes endpoints map[pod2:[80]] (1.037758839s elapsed)
STEP: Deleting pod pod2 in namespace services-5020
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5020 to expose endpoints map[]
Jan  1 14:40:03.417: INFO: successfully validated that service endpoint-test2 in namespace services-5020 exposes endpoints map[] (53.376247ms elapsed)
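The map[...] values being validated are the service's Endpoints object, which tracks the ready pods behind the selector and their container ports; it can be inspected directly while pods come and go:

  kubectl get endpoints endpoint-test2 --namespace=services-5020 -o yaml
  # subsets[].addresses lists each ready pod's IP; ports lists the exposed port (80 here)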
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:40:03.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5020" for this suite.
Jan  1 14:40:25.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:40:25.692: INFO: namespace services-5020 deletion completed in 22.144072106s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.517 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:40:25.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-9f5d3bff-06b7-4269-b39d-0e6499928805
STEP: Creating secret with name s-test-opt-upd-a0db1bb9-cc39-473e-9884-c24fa75ccb76
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9f5d3bff-06b7-4269-b39d-0e6499928805
STEP: Updating secret s-test-opt-upd-a0db1bb9-cc39-473e-9884-c24fa75ccb76
STEP: Creating secret with name s-test-opt-create-c564d07b-e378-43d6-8bd6-7197eee72d05
STEP: waiting to observe update in volume
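"Optional" here refers to secret volume sources with optional: true: the pod starts even while the referenced secret is absent, and the kubelet materializes the files once the secret is created. A sketch of such a pod, reusing the secret name created in the step above:

  kubectl apply --namespace=secrets-4656 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-secret-demo
  spec:
    containers:
    - name: watcher
      image: busybox
      # Poll the mount so the files showing up is visible in the logs
      command: ["/bin/sh", "-c", "while true; do ls /etc/secret 2>/dev/null; sleep 5; done"]
      volumeMounts:
      - name: optional-secret
        mountPath: /etc/secret
    volumes:
    - name: optional-secret
      secret:
        secretName: s-test-opt-create-c564d07b-e378-43d6-8bd6-7197eee72d05
        optional: true
  EOF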
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:42:12.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4656" for this suite.
Jan  1 14:42:34.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:42:34.932: INFO: namespace secrets-4656 deletion completed in 22.164363003s

• [SLOW TEST:129.240 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:42:34.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
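The log is terse here, but the test amounts to running a busybox pod whose command writes to stdout and asserting the text comes back through the kubelet's log endpoint; a hand-run equivalent:

  kubectl run logs-demo --image=busybox --restart=Never --namespace=kubelet-test-2964 -- /bin/sh -c 'echo running in a pod'
  kubectl logs logs-demo --namespace=kubelet-test-2964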
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:42:43.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2964" for this suite.
Jan  1 14:43:25.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:43:25.255: INFO: namespace kubelet-test-2964 deletion completed in 42.129546875s

• [SLOW TEST:50.322 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:43:25.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  1 14:43:31.973: INFO: 0 pods remaining
Jan  1 14:43:31.973: INFO: 0 pods has nil DeletionTimestamp
Jan  1 14:43:31.973: INFO: 
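"If the deleteOptions says so" means foreground cascading deletion: the RC receives a foregroundDeletion finalizer and is only removed after the garbage collector has deleted all of its pods, which matches the "0 pods remaining" countdown above. On a kubectl of this vintage the policy is easiest to pass through the raw API; a sketch, assuming the fixture RC is named simpletest.rc:

  kubectl proxy --port=8001 &
  # DeleteOptions.propagationPolicy=Foreground keeps the RC until its pods are gone
  curl -X DELETE 'http://127.0.0.1:8001/api/v1/namespaces/gc-239/replicationcontrollers/simpletest.rc' \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'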
STEP: Gathering metrics
W0101 14:43:32.720145       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  1 14:43:32.720: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:43:32.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-239" for this suite.
Jan  1 14:43:42.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:43:43.018: INFO: namespace gc-239 deletion completed in 10.271629007s

• [SLOW TEST:17.763 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
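
The deleteOptions this spec refers to is propagationPolicy=Foreground: the RC is retained (with a deletionTimestamp and a foregroundDeletion finalizer) until the garbage collector has removed all of its pods. A sketch of issuing such a delete directly against the API (rc name and namespace are illustrative):

    kubectl proxy --port=8001 &
    curl -X DELETE \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc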
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:43:43.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  1 14:43:43.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-5890'
Jan  1 14:43:43.270: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  1 14:43:43.270: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan  1 14:43:45.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5890'
Jan  1 14:43:45.559: INFO: stderr: ""
Jan  1 14:43:45.559: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:43:45.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5890" for this suite.
Jan  1 14:43:51.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:43:51.707: INFO: namespace kubectl-5890 deletion completed in 6.138884923s

• [SLOW TEST:8.688 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
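
The stderr captured above is kubectl 1.15's deprecation warning for generator-based `kubectl run`. The replacement it points at, `kubectl create deployment`, yields the same apps/v1 Deployment:

    kubectl create deployment e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine
    kubectl get deployment e2e-test-nginx-deployment
    kubectl delete deployment e2e-test-nginx-deployment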
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:43:51.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  1 14:43:51.864: INFO: Waiting up to 5m0s for pod "downward-api-7894414a-74f8-4e45-8d32-6477515f7236" in namespace "downward-api-8630" to be "success or failure"
Jan  1 14:43:51.876: INFO: Pod "downward-api-7894414a-74f8-4e45-8d32-6477515f7236": Phase="Pending", Reason="", readiness=false. Elapsed: 12.114762ms
Jan  1 14:43:53.896: INFO: Pod "downward-api-7894414a-74f8-4e45-8d32-6477515f7236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031689455s
Jan  1 14:43:55.903: INFO: Pod "downward-api-7894414a-74f8-4e45-8d32-6477515f7236": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039320614s
Jan  1 14:43:57.913: INFO: Pod "downward-api-7894414a-74f8-4e45-8d32-6477515f7236": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048775578s
Jan  1 14:43:59.926: INFO: Pod "downward-api-7894414a-74f8-4e45-8d32-6477515f7236": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062374278s
Jan  1 14:44:01.939: INFO: Pod "downward-api-7894414a-74f8-4e45-8d32-6477515f7236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074977193s
STEP: Saw pod success
Jan  1 14:44:01.939: INFO: Pod "downward-api-7894414a-74f8-4e45-8d32-6477515f7236" satisfied condition "success or failure"
Jan  1 14:44:01.944: INFO: Trying to get logs from node iruya-node pod downward-api-7894414a-74f8-4e45-8d32-6477515f7236 container dapi-container: 
STEP: delete the pod
Jan  1 14:44:02.077: INFO: Waiting for pod downward-api-7894414a-74f8-4e45-8d32-6477515f7236 to disappear
Jan  1 14:44:02.092: INFO: Pod downward-api-7894414a-74f8-4e45-8d32-6477515f7236 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:44:02.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8630" for this suite.
Jan  1 14:44:08.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:44:08.295: INFO: namespace downward-api-8630 deletion completed in 6.195460939s

• [SLOW TEST:16.587 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
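
The env vars under test come from resourceFieldRef in the downward API. A minimal pod that reproduces this by hand (name and resource values are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dapi-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E 'CPU_|MEMORY_'"]
        resources:
          requests: {cpu: 250m, memory: 32Mi}
          limits:   {cpu: 500m, memory: 64Mi}
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: dapi-container
              resource: limits.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: dapi-container
              resource: requests.memory
    EOF
    kubectl logs dapi-demo    # CPU_LIMIT=1 (millicores round up to whole cores), MEMORY_REQUEST=33554432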
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:44:08.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 14:44:08.453: INFO: Creating deployment "test-recreate-deployment"
Jan  1 14:44:08.462: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan  1 14:44:08.570: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan  1 14:44:10.628: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan  1 14:44:10.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:44:12.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:44:14.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:44:16.885: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713486648, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  1 14:44:18.651: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  1 14:44:18.662: INFO: Updating deployment test-recreate-deployment
Jan  1 14:44:18.662: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  1 14:44:19.085: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8612,SelfLink:/apis/apps/v1/namespaces/deployment-8612/deployments/test-recreate-deployment,UID:bbf9e992-49a3-4353-b1e7-75e70fb8ab11,ResourceVersion:18909511,Generation:2,CreationTimestamp:2020-01-01 14:44:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-01 14:44:19 +0000 UTC 2020-01-01 14:44:19 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-01 14:44:19 +0000 UTC 2020-01-01 14:44:08 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  1 14:44:19.095: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8612,SelfLink:/apis/apps/v1/namespaces/deployment-8612/replicasets/test-recreate-deployment-5c8c9cc69d,UID:c6a2ff29-2262-4416-b202-7cb587556b2f,ResourceVersion:18909509,Generation:1,CreationTimestamp:2020-01-01 14:44:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bbf9e992-49a3-4353-b1e7-75e70fb8ab11 0xc001e64f17 0xc001e64f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 14:44:19.095: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  1 14:44:19.096: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8612,SelfLink:/apis/apps/v1/namespaces/deployment-8612/replicasets/test-recreate-deployment-6df85df6b9,UID:ce065888-63fb-4a8b-900b-59a8fdd9940b,ResourceVersion:18909498,Generation:2,CreationTimestamp:2020-01-01 14:44:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment bbf9e992-49a3-4353-b1e7-75e70fb8ab11 0xc001e65097 0xc001e65098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  1 14:44:19.102: INFO: Pod "test-recreate-deployment-5c8c9cc69d-z9lkd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-z9lkd,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8612,SelfLink:/api/v1/namespaces/deployment-8612/pods/test-recreate-deployment-5c8c9cc69d-z9lkd,UID:9da02915-738a-4b0c-9c36-990256bb184b,ResourceVersion:18909512,Generation:0,CreationTimestamp:2020-01-01 14:44:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d c6a2ff29-2262-4416-b202-7cb587556b2f 0xc0028566f7 0xc0028566f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-srqlb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-srqlb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-srqlb true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002856770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002856790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:44:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:44:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:44:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-01 14:44:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-01 14:44:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:44:19.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8612" for this suite.
Jan  1 14:44:25.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:44:25.284: INFO: namespace deployment-8612 deletion completed in 6.176360518s

• [SLOW TEST:16.990 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
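
strategy.type=Recreate is what forces the old pod out before the replacement is created (hence the window above where the nginx pod is still Pending after the redis ReplicaSet has been scaled to 0). A hand-run sketch using the same two images as this run:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: recreate-demo
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels: {app: recreate-demo}
      template:
        metadata:
          labels: {app: recreate-demo}
        spec:
          containers:
          - name: main
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
    EOF
    kubectl set image deployment/recreate-demo main=docker.io/library/nginx:1.14-alpine
    kubectl get pods -w    # old pod reaches Terminating before the new one is created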
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:44:25.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  1 14:44:25.462: INFO: namespace kubectl-8754
Jan  1 14:44:25.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8754'
Jan  1 14:44:25.895: INFO: stderr: ""
Jan  1 14:44:25.896: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  1 14:44:26.912: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:44:26.913: INFO: Found 0 / 1
Jan  1 14:44:27.905: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:44:27.905: INFO: Found 0 / 1
Jan  1 14:44:28.904: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:44:28.905: INFO: Found 0 / 1
Jan  1 14:44:29.908: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:44:29.908: INFO: Found 0 / 1
Jan  1 14:44:30.909: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:44:30.909: INFO: Found 0 / 1
Jan  1 14:44:31.908: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:44:31.908: INFO: Found 0 / 1
Jan  1 14:44:32.911: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:44:32.911: INFO: Found 0 / 1
Jan  1 14:44:34.038: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:44:34.038: INFO: Found 1 / 1
Jan  1 14:44:34.038: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  1 14:44:34.046: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 14:44:34.046: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  1 14:44:34.046: INFO: wait on redis-master startup in kubectl-8754 
Jan  1 14:44:34.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v5jpr redis-master --namespace=kubectl-8754'
Jan  1 14:44:34.344: INFO: stderr: ""
Jan  1 14:44:34.345: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 01 Jan 14:44:32.681 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Jan 14:44:32.681 # Server started, Redis version 3.2.12\n1:M 01 Jan 14:44:32.681 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Jan 14:44:32.681 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  1 14:44:34.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8754'
Jan  1 14:44:34.525: INFO: stderr: ""
Jan  1 14:44:34.526: INFO: stdout: "service/rm2 exposed\n"
Jan  1 14:44:34.537: INFO: Service rm2 in namespace kubectl-8754 found.
STEP: exposing service
Jan  1 14:44:36.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8754'
Jan  1 14:44:36.799: INFO: stderr: ""
Jan  1 14:44:36.799: INFO: stdout: "service/rm3 exposed\n"
Jan  1 14:44:36.895: INFO: Service rm3 in namespace kubectl-8754 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:44:38.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8754" for this suite.
Jan  1 14:45:02.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:45:03.082: INFO: namespace kubectl-8754 deletion completed in 24.165733733s

• [SLOW TEST:37.796 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
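
kubectl expose copies the selector of the object it exposes, so rm2 targets the RC's pods and rm3 (exposed from rm2) inherits the same selector; only the service port differs. Verifying by hand (namespace flag omitted for brevity):

    kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
    kubectl get endpoints rm2 rm3    # both list the redis-master pod IP on port 6379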
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:45:03.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 14:45:03.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c" in namespace "projected-7162" to be "success or failure"
Jan  1 14:45:03.368: INFO: Pod "downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c": Phase="Pending", Reason="", readiness=false. Elapsed: 145.854582ms
Jan  1 14:45:05.375: INFO: Pod "downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15263779s
Jan  1 14:45:07.387: INFO: Pod "downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165093005s
Jan  1 14:45:09.404: INFO: Pod "downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181975623s
Jan  1 14:45:11.415: INFO: Pod "downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.192408497s
STEP: Saw pod success
Jan  1 14:45:11.415: INFO: Pod "downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c" satisfied condition "success or failure"
Jan  1 14:45:11.422: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c container client-container: 
STEP: delete the pod
Jan  1 14:45:11.502: INFO: Waiting for pod downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c to disappear
Jan  1 14:45:11.634: INFO: Pod downwardapi-volume-c0d6f489-29b5-4094-b2be-4c36c9c3361c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:45:11.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7162" for this suite.
Jan  1 14:45:17.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:45:18.035: INFO: namespace projected-7162 deletion completed in 6.390225567s

• [SLOW TEST:14.952 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
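
The "mode on item file" being asserted is the per-item file mode of a projected downward API volume. A minimal pod to inspect it by hand (names are illustrative; 0400 is one possible mode):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -lL /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                mode: 0400
                fieldRef:
                  fieldPath: metadata.name
    EOF
    kubectl logs projected-mode-demo    # -r-------- ... /etc/podinfo/podname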
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:45:18.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5346
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  1 14:45:18.166: INFO: Found 0 stateful pods, waiting for 3
Jan  1 14:45:28.178: INFO: Found 2 stateful pods, waiting for 3
Jan  1 14:45:38.177: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:45:38.177: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:45:38.177: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  1 14:45:48.180: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:45:48.180: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:45:48.180: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 14:45:48.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5346 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 14:45:48.976: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 14:45:48.976: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 14:45:48.976: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  1 14:45:59.034: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  1 14:46:09.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5346 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 14:46:09.481: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jan  1 14:46:09.482: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 14:46:09.482: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 14:46:19.540: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
Jan  1 14:46:19.540: INFO: Waiting for Pod statefulset-5346/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:46:19.540: INFO: Waiting for Pod statefulset-5346/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:46:19.540: INFO: Waiting for Pod statefulset-5346/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:46:29.556: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
Jan  1 14:46:29.556: INFO: Waiting for Pod statefulset-5346/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:46:29.556: INFO: Waiting for Pod statefulset-5346/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:46:39.562: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
Jan  1 14:46:39.562: INFO: Waiting for Pod statefulset-5346/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:46:39.562: INFO: Waiting for Pod statefulset-5346/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:46:49.571: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
Jan  1 14:46:49.571: INFO: Waiting for Pod statefulset-5346/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:46:59.557: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
Jan  1 14:46:59.557: INFO: Waiting for Pod statefulset-5346/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  1 14:47:09.590: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  1 14:47:19.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5346 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 14:47:21.897: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 14:47:21.897: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 14:47:21.897: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 14:47:31.949: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  1 14:47:42.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5346 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 14:47:42.476: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jan  1 14:47:42.476: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 14:47:42.476: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 14:47:52.603: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
Jan  1 14:47:52.603: INFO: Waiting for Pod statefulset-5346/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  1 14:47:52.603: INFO: Waiting for Pod statefulset-5346/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  1 14:48:02.640: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
Jan  1 14:48:02.640: INFO: Waiting for Pod statefulset-5346/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  1 14:48:02.640: INFO: Waiting for Pod statefulset-5346/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  1 14:48:13.002: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
Jan  1 14:48:13.002: INFO: Waiting for Pod statefulset-5346/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  1 14:48:22.635: INFO: Waiting for StatefulSet statefulset-5346/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  1 14:48:32.638: INFO: Deleting all statefulset in ns statefulset-5346
Jan  1 14:48:32.658: INFO: Scaling statefulset ss2 to 0
Jan  1 14:49:12.765: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 14:49:12.774: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:49:12.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5346" for this suite.
Jan  1 14:49:20.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:49:20.956: INFO: namespace statefulset-5346 deletion completed in 8.144220697s

• [SLOW TEST:242.921 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
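
The same update/rollback cycle can be driven by hand through the StatefulSet rollout machinery (names from this run; the container is assumed to be called nginx):

    kubectl -n statefulset-5346 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
    kubectl -n statefulset-5346 rollout status statefulset/ss2    # pods are replaced in reverse ordinal order
    kubectl -n statefulset-5346 rollout undo statefulset/ss2      # back to the previous controller revision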
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:49:20.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8894
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  1 14:49:21.052: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  1 14:49:51.459: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8894 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:49:51.459: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:49:52.068: INFO: Found all expected endpoints: [netserver-0]
Jan  1 14:49:52.079: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8894 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:49:52.079: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:49:52.576: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:49:52.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8894" for this suite.
Jan  1 14:50:16.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:50:16.783: INFO: namespace pod-network-test-8894 deletion completed in 24.192642292s

• [SLOW TEST:55.826 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
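
Each probe above is an exec into the hostNetwork helper pod, curling a netserver pod IP on 8080 and expecting the serving pod's hostname back. One probe, reproduced by hand (the pod IP is from this run and is ephemeral):

    kubectl -n pod-network-test-8894 exec host-test-container-pod -c hostexec -- \
      /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName"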
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:50:16.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4945.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4945.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  1 14:50:29.061: INFO: File wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-2120b222-3139-422a-ab63-470f3cc06763 contains '' instead of 'foo.example.com.'
Jan  1 14:50:29.082: INFO: File jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-2120b222-3139-422a-ab63-470f3cc06763 contains '' instead of 'foo.example.com.'
Jan  1 14:50:29.082: INFO: Lookups using dns-4945/dns-test-2120b222-3139-422a-ab63-470f3cc06763 failed for: [wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local]

Jan  1 14:50:34.101: INFO: DNS probes using dns-test-2120b222-3139-422a-ab63-470f3cc06763 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4945.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4945.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  1 14:50:48.344: INFO: File wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 contains '' instead of 'bar.example.com.'
Jan  1 14:50:48.351: INFO: File jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 contains '' instead of 'bar.example.com.'
Jan  1 14:50:48.352: INFO: Lookups using dns-4945/dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 failed for: [wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local]

Jan  1 14:50:53.368: INFO: File wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  1 14:50:53.376: INFO: File jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  1 14:50:53.376: INFO: Lookups using dns-4945/dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 failed for: [wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local]

Jan  1 14:50:58.368: INFO: File wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  1 14:50:58.378: INFO: File jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  1 14:50:58.378: INFO: Lookups using dns-4945/dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 failed for: [wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local]

Jan  1 14:51:03.389: INFO: DNS probes using dns-test-9402961c-3f83-42ef-90bc-f8881193e9a4 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4945.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4945.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  1 14:51:17.754: INFO: File wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-ef3df91f-81dd-4f9b-bb51-2dfbec866d1e contains '' instead of '10.111.201.217'
Jan  1 14:51:17.762: INFO: File jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local from pod  dns-4945/dns-test-ef3df91f-81dd-4f9b-bb51-2dfbec866d1e contains '' instead of '10.111.201.217'
Jan  1 14:51:17.762: INFO: Lookups using dns-4945/dns-test-ef3df91f-81dd-4f9b-bb51-2dfbec866d1e failed for: [wheezy_udp@dns-test-service-3.dns-4945.svc.cluster.local jessie_udp@dns-test-service-3.dns-4945.svc.cluster.local]

Jan  1 14:51:22.780: INFO: DNS probes using dns-test-ef3df91f-81dd-4f9b-bb51-2dfbec866d1e succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:51:23.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4945" for this suite.
Jan  1 14:51:31.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:51:32.062: INFO: namespace dns-4945 deletion completed in 8.242257795s

• [SLOW TEST:75.279 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
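
The three phases above are: an ExternalName service resolving as a CNAME to foo.example.com, a patched externalName resolving to bar.example.com, and finally type=ClusterIP resolving as an A record. A sketch of the same transitions in the default namespace (the prober pod is assumed to have dig installed; the ClusterIP conversion patch is one way to express what the test does through the API):

    kubectl create service externalname dns-test-service-3 --external-name foo.example.com
    # from the prober pod:
    #   dig +short dns-test-service-3.default.svc.cluster.local CNAME   -> foo.example.com.
    kubectl patch service dns-test-service-3 \
      -p '{"spec":{"externalName":"bar.example.com"}}'
    kubectl patch service dns-test-service-3 \
      -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80}]}}'
    #   dig +short dns-test-service-3.default.svc.cluster.local A       -> the assigned cluster IP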
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:51:32.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-08c8f1fa-798c-4b52-9dde-52fd61bd38c8
STEP: Creating a pod to test consume secrets
Jan  1 14:51:32.221: INFO: Waiting up to 5m0s for pod "pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4" in namespace "secrets-6699" to be "success or failure"
Jan  1 14:51:32.227: INFO: Pod "pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.8775ms
Jan  1 14:51:34.236: INFO: Pod "pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014677503s
Jan  1 14:51:36.246: INFO: Pod "pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024084065s
Jan  1 14:51:38.310: INFO: Pod "pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08865755s
Jan  1 14:51:40.327: INFO: Pod "pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105824134s
STEP: Saw pod success
Jan  1 14:51:40.327: INFO: Pod "pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4" satisfied condition "success or failure"
Jan  1 14:51:40.334: INFO: Trying to get logs from node iruya-node pod pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4 container secret-volume-test: 
STEP: delete the pod
Jan  1 14:51:40.407: INFO: Waiting for pod pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4 to disappear
Jan  1 14:51:40.415: INFO: Pod pod-secrets-388ddeb1-9417-407a-a25f-0db2c6c4edf4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:51:40.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6699" for this suite.
Jan  1 14:51:46.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:51:46.990: INFO: namespace secrets-6699 deletion completed in 6.566621371s

• [SLOW TEST:14.927 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
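
The "Creating a pod to test consume secrets" step presumably builds a pod along these lines: the named secret mounted as a volume, and a short-lived container that reads the file back so the pod can reach Phase=Succeeded. Image, mount path, and key name are illustrative, not taken from the log:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // secretVolumePod sketches the consuming pod: the secret is projected
    // into /etc/secret-volume and printed once, then the container exits.
    func secretVolumePod(secretName string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // run to completion
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: secretName},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox", // illustrative; the suite uses its own test image
                    Command: []string{"cat", "/etc/secret-volume/data-1"}, // illustrative key
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                        ReadOnly:  true,
                    }},
                }},
            },
        }
    }
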
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:51:46.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 14:51:55.207: INFO: Waiting up to 5m0s for pod "client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4" in namespace "pods-4191" to be "success or failure"
Jan  1 14:51:55.221: INFO: Pod "client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.043923ms
Jan  1 14:51:57.231: INFO: Pod "client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023379105s
Jan  1 14:51:59.249: INFO: Pod "client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041447653s
Jan  1 14:52:01.257: INFO: Pod "client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050241525s
Jan  1 14:52:03.269: INFO: Pod "client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061820641s
Jan  1 14:52:05.278: INFO: Pod "client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071160945s
STEP: Saw pod success
Jan  1 14:52:05.279: INFO: Pod "client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4" satisfied condition "success or failure"
Jan  1 14:52:05.283: INFO: Trying to get logs from node iruya-node pod client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4 container env3cont: 
STEP: delete the pod
Jan  1 14:52:05.452: INFO: Waiting for pod client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4 to disappear
Jan  1 14:52:05.472: INFO: Pod client-envvars-97c0545f-e3e7-4c03-8ff4-404c470615f4 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:52:05.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4191" for this suite.
Jan  1 14:53:07.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:53:07.873: INFO: namespace pods-4191 deletion completed in 1m2.354049345s

• [SLOW TEST:80.882 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
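
"should contain environment variables for services" relies on the kubelet injecting <SERVICE>_SERVICE_HOST / <SERVICE>_SERVICE_PORT variables for services that existed before the pod started (which is why the spec waits several seconds before creating the client pod). A sketch of a client pod that would surface them; the service name in the comment is illustrative:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // envVarsClientPod just prints its environment; for a pre-existing
    // service named "fooservice" the kubelet injects, among others:
    //   FOOSERVICE_SERVICE_HOST=<cluster IP>
    //   FOOSERVICE_SERVICE_PORT=<port>
    func envVarsClientPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "client-envvars-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "env3cont",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "env"},
                }},
            },
        }
    }
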
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:53:07.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  1 14:53:08.086: INFO: Waiting up to 5m0s for pod "pod-b18adee4-0d41-4893-9a6b-418c88a4e698" in namespace "emptydir-3443" to be "success or failure"
Jan  1 14:53:08.106: INFO: Pod "pod-b18adee4-0d41-4893-9a6b-418c88a4e698": Phase="Pending", Reason="", readiness=false. Elapsed: 20.330679ms
Jan  1 14:53:10.119: INFO: Pod "pod-b18adee4-0d41-4893-9a6b-418c88a4e698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032959116s
Jan  1 14:53:12.140: INFO: Pod "pod-b18adee4-0d41-4893-9a6b-418c88a4e698": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053453891s
Jan  1 14:53:14.151: INFO: Pod "pod-b18adee4-0d41-4893-9a6b-418c88a4e698": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065241483s
Jan  1 14:53:16.160: INFO: Pod "pod-b18adee4-0d41-4893-9a6b-418c88a4e698": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073818381s
STEP: Saw pod success
Jan  1 14:53:16.160: INFO: Pod "pod-b18adee4-0d41-4893-9a6b-418c88a4e698" satisfied condition "success or failure"
Jan  1 14:53:16.165: INFO: Trying to get logs from node iruya-node pod pod-b18adee4-0d41-4893-9a6b-418c88a4e698 container test-container: 
STEP: delete the pod
Jan  1 14:53:16.231: INFO: Waiting for pod pod-b18adee4-0d41-4893-9a6b-418c88a4e698 to disappear
Jan  1 14:53:16.235: INFO: Pod pod-b18adee4-0d41-4893-9a6b-418c88a4e698 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:53:16.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3443" for this suite.
Jan  1 14:53:22.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:53:22.436: INFO: namespace emptydir-3443 deletion completed in 6.196913655s

• [SLOW TEST:14.561 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
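
The "(non-root,0644,default)" triple in the spec name encodes the three knobs being tested: run as a non-root UID, create the test file with mode 0644, and use the default (node-disk) emptyDir medium; the "(root,0666,tmpfs)" variant further down differs only in those knobs. A sketch under that reading, with an illustrative UID and image:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirPod sketches the probe pod: an emptyDir volume on the
    // default medium (corev1.StorageMediumMemory gives the tmpfs variant),
    // exercised by a short-lived non-root container.
    func emptyDirPod() *corev1.Pod {
        nonRootUID := int64(1001) // illustrative non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: &nonRootUID,
                },
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{
                            Medium: corev1.StorageMediumDefault,
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox", // illustrative
                    Command: []string{"sh", "-c",
                        "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
    }
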
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:53:22.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  1 14:53:22.670: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"75057349-a950-4b6f-85a6-f8d6cddf0e0a", Controller:(*bool)(0xc002823d52), BlockOwnerDeletion:(*bool)(0xc002823d53)}}
Jan  1 14:53:22.699: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3cf88108-8e5c-4d7f-94b3-fd1bcc8737fe", Controller:(*bool)(0xc002823eea), BlockOwnerDeletion:(*bool)(0xc002823eeb)}}
Jan  1 14:53:22.726: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1f0e47e7-c97c-4a5d-a793-c36bc8f869dd", Controller:(*bool)(0xc002805bea), BlockOwnerDeletion:(*bool)(0xc002805beb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:53:27.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7363" for this suite.
Jan  1 14:53:33.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:53:34.040: INFO: namespace gc-7363 deletion completed in 6.272587358s

• [SLOW TEST:11.604 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
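
The three INFO lines above show the circle being built: pod1 is owned by pod3, pod2 by pod1, pod3 by pod2, each reference with Controller and BlockOwnerDeletion set. The assertion is that the garbage collector still removes all three rather than deadlocking on blockOwnerDeletion. A sketch of wiring one link of that circle (image is illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    // ownedPod returns a pod that lists another pod as its controller
    // owner; repeating this as pod1<-pod3, pod2<-pod1, pod3<-pod2 closes
    // the dependency circle seen in the log above.
    func ownedPod(name, ownerName string, ownerUID types.UID) *corev1.Pod {
        isController := true
        block := true
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: name,
                OwnerReferences: []metav1.OwnerReference{{
                    APIVersion:         "v1",
                    Kind:               "Pod",
                    Name:               ownerName,
                    UID:                ownerUID,
                    Controller:         &isController,
                    BlockOwnerDeletion: &block,
                }},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.1", // illustrative
                }},
            },
        }
    }
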
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:53:34.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9888
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  1 14:53:34.144: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  1 14:54:08.437: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-9888 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:54:08.437: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:54:09.061: INFO: Waiting for endpoints: map[]
Jan  1 14:54:09.073: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-9888 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  1 14:54:09.073: INFO: >>> kubeConfig: /root/.kube/config
Jan  1 14:54:09.384: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:54:09.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9888" for this suite.
Jan  1 14:54:31.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:54:31.550: INFO: namespace pod-network-test-9888 deletion completed in 22.144753417s

• [SLOW TEST:57.508 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
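
The curl lines above exercise the test webserver's /dial endpoint: from a host-network helper pod, the pod at 10.44.0.2 is asked to dial each target pod on port 8080 and report which hostnames answered, and "Waiting for endpoints: map[]" means no expected endpoint is still missing. The same probe from the standard library, as a sketch; the dial URL shape is copied from the log, but the response format is an assumption:

    package main

    import (
        "fmt"
        "io/ioutil"
        "net/http"
    )

    // dialProbe asks the test container at proxyIP to dial targetIP:8080
    // over HTTP once and returns the raw response body (presumably JSON
    // listing the hostnames that answered).
    func dialProbe(proxyIP, targetIP string) (string, error) {
        url := fmt.Sprintf(
            "http://%s:8080/dial?request=hostName&protocol=http&host=%s&port=8080&tries=1",
            proxyIP, targetIP)
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := ioutil.ReadAll(resp.Body)
        return string(body), err
    }
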
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:54:31.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan  1 14:54:31.677: INFO: Waiting up to 5m0s for pod "client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14" in namespace "containers-5853" to be "success or failure"
Jan  1 14:54:31.689: INFO: Pod "client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14": Phase="Pending", Reason="", readiness=false. Elapsed: 11.419041ms
Jan  1 14:54:33.697: INFO: Pod "client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019345604s
Jan  1 14:54:35.707: INFO: Pod "client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029121037s
Jan  1 14:54:37.718: INFO: Pod "client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04012482s
Jan  1 14:54:39.731: INFO: Pod "client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053311479s
STEP: Saw pod success
Jan  1 14:54:39.731: INFO: Pod "client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14" satisfied condition "success or failure"
Jan  1 14:54:39.737: INFO: Trying to get logs from node iruya-node pod client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14 container test-container: 
STEP: delete the pod
Jan  1 14:54:39.935: INFO: Waiting for pod client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14 to disappear
Jan  1 14:54:39.955: INFO: Pod client-containers-27a66798-bdcc-429b-a4b5-8b35a1691d14 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:54:39.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5853" for this suite.
Jan  1 14:54:46.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:54:46.125: INFO: namespace containers-5853 deletion completed in 6.158338906s

• [SLOW TEST:14.575 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
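
"override the image's default command and arguments" maps to setting both Command (which replaces the image ENTRYPOINT) and Args (which replaces CMD) on the container; the "use the image defaults" spec further down leaves both unset. A sketch of the "override all" case, with illustrative image and strings:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // overrideAllPod sets both fields, the "override all" case logged
    // above; omitting Command keeps the image ENTRYPOINT, omitting Args
    // keeps the image CMD.
    func overrideAllPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox",                              // illustrative
                    Command: []string{"sh", "-c"},                   // replaces ENTRYPOINT
                    Args:    []string{"echo overridden entrypoint"}, // replaces CMD
                }},
            },
        }
    }
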
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:54:46.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 14:54:46.260: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963" in namespace "downward-api-7810" to be "success or failure"
Jan  1 14:54:46.275: INFO: Pod "downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963": Phase="Pending", Reason="", readiness=false. Elapsed: 15.016589ms
Jan  1 14:54:48.695: INFO: Pod "downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.434875756s
Jan  1 14:54:50.716: INFO: Pod "downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455824712s
Jan  1 14:54:52.733: INFO: Pod "downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963": Phase="Pending", Reason="", readiness=false. Elapsed: 6.472679622s
Jan  1 14:54:54.746: INFO: Pod "downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.486453199s
STEP: Saw pod success
Jan  1 14:54:54.747: INFO: Pod "downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963" satisfied condition "success or failure"
Jan  1 14:54:54.754: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963 container client-container: 
STEP: delete the pod
Jan  1 14:54:54.832: INFO: Waiting for pod downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963 to disappear
Jan  1 14:54:54.856: INFO: Pod downwardapi-volume-1194a32e-c30c-4145-9593-272febc1a963 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:54:54.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7810" for this suite.
Jan  1 14:55:00.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:55:01.082: INFO: namespace downward-api-7810 deletion completed in 6.218579166s

• [SLOW TEST:14.955 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
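
The downward API volume plugin writes container metadata into files; for this spec the file content must be the container's CPU limit. The neighbouring "(memory)/(cpu) as default ... if the limit is not set" specs below verify the fallback: with Resources left empty, the file reports node allocatable instead. A sketch of the wiring, assuming a file name of "cpu_limit" and an illustrative limit:

    package main

    import (
        "k8s.io/apimachinery/pkg/api/resource"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // cpuLimitPod mounts a downwardAPI volume whose single file exposes
    // limits.cpu of the client container; the container prints the file
    // and the test compares it with the limit set below.
    func cpuLimitPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU: resource.MustParse("1250m"), // illustrative
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
            },
        }
    }
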
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:55:01.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-88a6e8b1-2769-4872-aceb-d6275a9a793e
STEP: Creating a pod to test consume secrets
Jan  1 14:55:01.211: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448" in namespace "projected-7455" to be "success or failure"
Jan  1 14:55:01.216: INFO: Pod "pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581366ms
Jan  1 14:55:03.229: INFO: Pod "pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017808523s
Jan  1 14:55:05.245: INFO: Pod "pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034024444s
Jan  1 14:55:07.267: INFO: Pod "pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055256287s
Jan  1 14:55:09.277: INFO: Pod "pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065218315s
STEP: Saw pod success
Jan  1 14:55:09.277: INFO: Pod "pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448" satisfied condition "success or failure"
Jan  1 14:55:09.282: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448 container secret-volume-test: 
STEP: delete the pod
Jan  1 14:55:09.351: INFO: Waiting for pod pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448 to disappear
Jan  1 14:55:09.390: INFO: Pod pod-projected-secrets-a644b340-2ed0-49c5-8143-2541f0896448 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:55:09.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7455" for this suite.
Jan  1 14:55:15.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:55:15.597: INFO: namespace projected-7455 deletion completed in 6.197786103s

• [SLOW TEST:14.515 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
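
"consumable in multiple volumes in a pod" presumably mounts the same projected secret at two separate paths in one pod and checks both copies. A sketch under that assumption; mount paths and image are illustrative:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedSecretTwice declares one projected secret source as two
    // volumes mounted at two mount points in the same pod.
    func projectedSecretTwice(secretName string) *corev1.Pod {
        projected := corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                    },
                }},
            },
        }
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{
                    {Name: "secret-volume-1", VolumeSource: projected},
                    {Name: "secret-volume-2", VolumeSource: projected},
                },
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox", // illustrative
                    Command: []string{"sh", "-c", "cat /etc/projected-1/* /etc/projected-2/*"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume-1", MountPath: "/etc/projected-1", ReadOnly: true},
                        {Name: "secret-volume-2", MountPath: "/etc/projected-2", ReadOnly: true},
                    },
                }},
            },
        }
    }
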
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:55:15.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 14:55:15.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5" in namespace "projected-2614" to be "success or failure"
Jan  1 14:55:15.761: INFO: Pod "downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.019128ms
Jan  1 14:55:17.776: INFO: Pod "downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0324992s
Jan  1 14:55:19.787: INFO: Pod "downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043168216s
Jan  1 14:55:21.858: INFO: Pod "downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114739542s
Jan  1 14:55:23.877: INFO: Pod "downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.133099577s
STEP: Saw pod success
Jan  1 14:55:23.877: INFO: Pod "downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5" satisfied condition "success or failure"
Jan  1 14:55:23.894: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5 container client-container: 
STEP: delete the pod
Jan  1 14:55:24.066: INFO: Waiting for pod downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5 to disappear
Jan  1 14:55:24.074: INFO: Pod downwardapi-volume-b06d2886-21de-46cd-9858-91ff7b6ccee5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:55:24.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2614" for this suite.
Jan  1 14:55:30.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:55:30.380: INFO: namespace projected-2614 deletion completed in 6.298300338s

• [SLOW TEST:14.782 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:55:30.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-4d296bd7-3391-4071-b566-a63850d95a26
STEP: Creating configMap with name cm-test-opt-upd-3b1659d9-6301-4a6d-8078-b2abaf45564d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-4d296bd7-3391-4071-b566-a63850d95a26
STEP: Updating configmap cm-test-opt-upd-3b1659d9-6301-4a6d-8078-b2abaf45564d
STEP: Creating configMap with name cm-test-opt-create-56caabb5-0f0d-4611-ab69-5b697452e607
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:56:46.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1227" for this suite.
Jan  1 14:57:10.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:57:10.467: INFO: namespace projected-1227 deletion completed in 24.220730379s

• [SLOW TEST:100.087 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
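
The del/upd/create trio of configMaps above is the interesting part of this spec: the volumes reference them with optional=true, so the pod keeps running even while "cm-test-opt-create-..." does not exist yet, and the kubelet later reflects the delete, the update, and the late create into the mounted files ("waiting to observe update in volume"). A sketch of one such optional projection:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // optionalConfigMapVolume references a configMap that may not exist;
    // Optional=true lets the pod run anyway, and the volume contents
    // appear (or disappear) as the configMap is created, updated, deleted.
    func optionalConfigMapVolume(volumeName, cmName string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: volumeName,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                            Optional:             &optional,
                        },
                    }},
                },
            },
        }
    }
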
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:57:10.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 14:57:10.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc" in namespace "projected-3300" to be "success or failure"
Jan  1 14:57:10.599: INFO: Pod "downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc": Phase="Pending", Reason="", readiness=false. Elapsed: 35.492401ms
Jan  1 14:57:12.610: INFO: Pod "downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046158537s
Jan  1 14:57:14.620: INFO: Pod "downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056666243s
Jan  1 14:57:16.644: INFO: Pod "downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080065051s
Jan  1 14:57:18.663: INFO: Pod "downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099714102s
STEP: Saw pod success
Jan  1 14:57:18.664: INFO: Pod "downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc" satisfied condition "success or failure"
Jan  1 14:57:18.673: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc container client-container: 
STEP: delete the pod
Jan  1 14:57:18.792: INFO: Waiting for pod downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc to disappear
Jan  1 14:57:18.802: INFO: Pod downwardapi-volume-9548c52b-fbf7-49ae-aa2b-7383c67d36fc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:57:18.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3300" for this suite.
Jan  1 14:57:24.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:57:25.093: INFO: namespace projected-3300 deletion completed in 6.163903425s

• [SLOW TEST:14.626 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:57:25.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-dcss
STEP: Creating a pod to test atomic-volume-subpath
Jan  1 14:57:25.215: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dcss" in namespace "subpath-1711" to be "success or failure"
Jan  1 14:57:25.224: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Pending", Reason="", readiness=false. Elapsed: 9.013441ms
Jan  1 14:57:27.233: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01816895s
Jan  1 14:57:29.245: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029889449s
Jan  1 14:57:31.252: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036760067s
Jan  1 14:57:33.261: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 8.045305595s
Jan  1 14:57:35.276: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 10.061128608s
Jan  1 14:57:37.286: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 12.070493739s
Jan  1 14:57:39.296: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 14.080433801s
Jan  1 14:57:41.344: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 16.128266772s
Jan  1 14:57:43.352: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 18.136934189s
Jan  1 14:57:45.366: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 20.15093556s
Jan  1 14:57:47.375: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 22.160063777s
Jan  1 14:57:49.384: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 24.168895256s
Jan  1 14:57:51.397: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Running", Reason="", readiness=true. Elapsed: 26.181265363s
Jan  1 14:57:53.404: INFO: Pod "pod-subpath-test-secret-dcss": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.188588696s
STEP: Saw pod success
Jan  1 14:57:53.404: INFO: Pod "pod-subpath-test-secret-dcss" satisfied condition "success or failure"
Jan  1 14:57:53.409: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-dcss container test-container-subpath-secret-dcss: 
STEP: delete the pod
Jan  1 14:57:53.454: INFO: Waiting for pod pod-subpath-test-secret-dcss to disappear
Jan  1 14:57:53.459: INFO: Pod pod-subpath-test-secret-dcss no longer exists
STEP: Deleting pod pod-subpath-test-secret-dcss
Jan  1 14:57:53.459: INFO: Deleting pod "pod-subpath-test-secret-dcss" in namespace "subpath-1711"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:57:53.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1711" for this suite.
Jan  1 14:57:59.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:57:59.633: INFO: namespace subpath-1711 deletion completed in 6.164915721s

• [SLOW TEST:34.540 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
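
The long Running phase above (roughly 28s against the usual 8s in this run) is characteristic of the atomic-writer subpath specs: the container presumably keeps re-reading a single file mounted via subPath while the volume contents are atomically swapped underneath it. A sketch of the mount shape, with an illustrative secret and file name:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // secretSubpathMount mounts only one entry of a secret volume into
    // the container by using SubPath, rather than the whole directory.
    func secretSubpathMount() ([]corev1.Volume, []corev1.VolumeMount) {
        volumes := []corev1.Volume{{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // illustrative
            },
        }}
        mounts := []corev1.VolumeMount{{
            Name:      "test-volume",
            MountPath: "/test-volume/file.txt",
            SubPath:   "file.txt", // a single entry of the volume
            ReadOnly:  true,
        }}
        return volumes, mounts
    }
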
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:57:59.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 14:57:59.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5" in namespace "projected-9472" to be "success or failure"
Jan  1 14:57:59.940: INFO: Pod "downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5": Phase="Pending", Reason="", readiness=false. Elapsed: 117.852851ms
Jan  1 14:58:01.951: INFO: Pod "downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129198521s
Jan  1 14:58:03.979: INFO: Pod "downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156463645s
Jan  1 14:58:05.986: INFO: Pod "downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163434757s
Jan  1 14:58:07.994: INFO: Pod "downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.172024452s
STEP: Saw pod success
Jan  1 14:58:07.994: INFO: Pod "downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5" satisfied condition "success or failure"
Jan  1 14:58:07.997: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5 container client-container: 
STEP: delete the pod
Jan  1 14:58:08.089: INFO: Waiting for pod downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5 to disappear
Jan  1 14:58:08.168: INFO: Pod downwardapi-volume-03732023-7968-4e48-ad6b-62332995aac5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:58:08.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9472" for this suite.
Jan  1 14:58:14.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:58:14.360: INFO: namespace projected-9472 deletion completed in 6.142497276s

• [SLOW TEST:14.727 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:58:14.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  1 14:58:14.926: INFO: Waiting up to 5m0s for pod "pod-452f99dd-fa7f-4148-8279-229d7048ff55" in namespace "emptydir-7828" to be "success or failure"
Jan  1 14:58:14.939: INFO: Pod "pod-452f99dd-fa7f-4148-8279-229d7048ff55": Phase="Pending", Reason="", readiness=false. Elapsed: 12.363351ms
Jan  1 14:58:16.951: INFO: Pod "pod-452f99dd-fa7f-4148-8279-229d7048ff55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025031637s
Jan  1 14:58:18.959: INFO: Pod "pod-452f99dd-fa7f-4148-8279-229d7048ff55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032284602s
Jan  1 14:58:20.974: INFO: Pod "pod-452f99dd-fa7f-4148-8279-229d7048ff55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047801916s
Jan  1 14:58:22.981: INFO: Pod "pod-452f99dd-fa7f-4148-8279-229d7048ff55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054946692s
STEP: Saw pod success
Jan  1 14:58:22.982: INFO: Pod "pod-452f99dd-fa7f-4148-8279-229d7048ff55" satisfied condition "success or failure"
Jan  1 14:58:22.985: INFO: Trying to get logs from node iruya-node pod pod-452f99dd-fa7f-4148-8279-229d7048ff55 container test-container: 
STEP: delete the pod
Jan  1 14:58:23.110: INFO: Waiting for pod pod-452f99dd-fa7f-4148-8279-229d7048ff55 to disappear
Jan  1 14:58:23.125: INFO: Pod pod-452f99dd-fa7f-4148-8279-229d7048ff55 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:58:23.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7828" for this suite.
Jan  1 14:58:29.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:58:29.315: INFO: namespace emptydir-7828 deletion completed in 6.178644681s

• [SLOW TEST:14.954 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:58:29.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan  1 14:58:29.466: INFO: Waiting up to 5m0s for pod "client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c" in namespace "containers-1955" to be "success or failure"
Jan  1 14:58:29.522: INFO: Pod "client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c": Phase="Pending", Reason="", readiness=false. Elapsed: 55.100889ms
Jan  1 14:58:31.534: INFO: Pod "client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067843186s
Jan  1 14:58:33.539: INFO: Pod "client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072676686s
Jan  1 14:58:35.549: INFO: Pod "client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081915409s
Jan  1 14:58:37.560: INFO: Pod "client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093796557s
STEP: Saw pod success
Jan  1 14:58:37.561: INFO: Pod "client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c" satisfied condition "success or failure"
Jan  1 14:58:37.568: INFO: Trying to get logs from node iruya-node pod client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c container test-container: 
STEP: delete the pod
Jan  1 14:58:37.660: INFO: Waiting for pod client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c to disappear
Jan  1 14:58:37.664: INFO: Pod client-containers-cf3aafac-9b07-44e4-a992-2b8b5f4d628c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:58:37.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1955" for this suite.
Jan  1 14:58:43.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:58:43.937: INFO: namespace containers-1955 deletion completed in 6.265415608s

• [SLOW TEST:14.622 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:58:43.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  1 14:58:52.681: INFO: Successfully updated pod "labelsupdate2b1fcadf-9ab4-4f0f-a3a9-b6f7adfa4b9f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:58:54.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9476" for this suite.
Jan  1 14:59:32.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:59:32.981: INFO: namespace projected-9476 deletion completed in 38.185111151s

• [SLOW TEST:49.039 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:59:32.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan  1 14:59:41.234: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan  1 14:59:51.484: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 14:59:51.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6319" for this suite.
Jan  1 14:59:57.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 14:59:57.662: INFO: namespace pods-6319 deletion completed in 6.156160619s

• [SLOW TEST:24.680 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
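
"deleting the pod gracefully" issues the delete with an explicit grace period and then watches, via the kubectl proxy started above, for the kubelet to observe the termination; the "no pod exists with the name we were looking for" line is the success path. A sketch of the client-side delete, assuming the client-go vintage matching this log (v1.15 era; newer releases add a context argument):

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteGracefully deletes a pod with an explicit grace period in
    // seconds; 0 would force immediate deletion, nil would use the
    // pod's own terminationGracePeriodSeconds.
    func deleteGracefully(c kubernetes.Interface, ns, name string, seconds int64) error {
        return c.CoreV1().Pods(ns).Delete(name, &metav1.DeleteOptions{
            GracePeriodSeconds: &seconds,
        })
    }
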
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 14:59:57.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6241.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6241.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  1 15:00:11.958: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-6241/dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1: the server could not find the requested resource (get pods dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1)
Jan  1 15:00:11.969: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-6241/dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1: the server could not find the requested resource (get pods dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1)
Jan  1 15:00:11.975: INFO: Unable to read jessie_udp@PodARecord from pod dns-6241/dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1: the server could not find the requested resource (get pods dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1)
Jan  1 15:00:11.979: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6241/dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1: the server could not find the requested resource (get pods dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1)
Jan  1 15:00:11.979: INFO: Lookups using dns-6241/dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1 failed for: [jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  1 15:00:17.062: INFO: DNS probes using dns-6241/dns-test-a3bcdf2b-19fc-4fc5-a111-f904fe52f5d1 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:00:17.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6241" for this suite.
Jan  1 15:00:23.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:00:23.370: INFO: namespace dns-6241 deletion completed in 6.216010692s

• [SLOW TEST:25.705 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
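
The wheezy/jessie command blobs above are the whole mechanism of this spec: each prober pod loops dig over UDP (+notcp) and TCP (+tcp) against kubernetes.default.svc.cluster.local and against its own pod A record, writing an OK marker file per successful lookup, which the test then fetches; the early "Unable to read" lines presumably mean the jessie prober had not yet written its markers when first polled, and the retry five seconds later succeeded. The core lookup from Go, as a sketch (minus the UDP/TCP distinction that dig's flags give the probers):

    package main

    import (
        "fmt"
        "net"
    )

    // lookupClusterService resolves the API server's in-cluster DNS name
    // the way the probers above do.
    func lookupClusterService() error {
        addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
        if err != nil {
            return err
        }
        fmt.Println("resolved to", addrs) // expected: the kubernetes service ClusterIP
        return nil
    }
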
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:00:23.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6553
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6553
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6553
Jan  1 15:00:23.560: INFO: Found 0 stateful pods, waiting for 1
Jan  1 15:00:33.572: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  1 15:00:33.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 15:00:35.819: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 15:00:35.820: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 15:00:35.820: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 15:00:35.833: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  1 15:00:45.865: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 15:00:45.865: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 15:00:46.102: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999996727s
Jan  1 15:00:47.110: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.873800942s
Jan  1 15:00:48.116: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.866566461s
Jan  1 15:00:49.136: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.860438619s
Jan  1 15:00:50.150: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.840390385s
Jan  1 15:00:51.161: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.826364789s
Jan  1 15:00:52.185: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.814985199s
Jan  1 15:00:53.194: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.790866806s
Jan  1 15:00:54.205: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.782199223s
Jan  1 15:00:55.225: INFO: Verifying statefulset ss doesn't scale past 1 for another 770.334664ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-6553
Jan  1 15:00:56.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 15:00:56.856: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jan  1 15:00:56.856: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 15:00:56.856: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 15:00:56.869: INFO: Found 1 stateful pods, waiting for 3
Jan  1 15:01:06.890: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 15:01:06.891: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 15:01:06.891: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  1 15:01:16.883: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 15:01:16.883: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  1 15:01:16.883: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  1 15:01:16.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 15:01:17.405: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 15:01:17.405: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 15:01:17.405: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 15:01:17.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 15:01:17.700: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 15:01:17.700: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 15:01:17.700: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 15:01:17.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  1 15:01:18.238: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jan  1 15:01:18.238: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  1 15:01:18.238: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  1 15:01:18.238: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 15:01:18.269: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  1 15:01:28.335: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 15:01:28.335: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 15:01:28.335: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  1 15:01:28.358: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999376s
Jan  1 15:01:29.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995141888s
Jan  1 15:01:30.404: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984076988s
Jan  1 15:01:31.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.949369062s
Jan  1 15:01:32.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.934825176s
Jan  1 15:01:33.674: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.92727037s
Jan  1 15:01:34.685: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.678485352s
Jan  1 15:01:35.694: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.668490161s
Jan  1 15:01:36.721: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.659488905s
Jan  1 15:01:37.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 632.482345ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6553
Jan  1 15:01:38.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 15:01:39.431: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jan  1 15:01:39.431: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 15:01:39.431: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 15:01:39.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 15:01:39.832: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jan  1 15:01:39.832: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 15:01:39.832: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 15:01:39.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6553 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  1 15:01:40.240: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jan  1 15:01:40.240: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  1 15:01:40.240: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  1 15:01:40.240: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  1 15:02:20.270: INFO: Deleting all statefulset in ns statefulset-6553
Jan  1 15:02:20.276: INFO: Scaling statefulset ss to 0
Jan  1 15:02:20.289: INFO: Waiting for statefulset status.replicas updated to 0
Jan  1 15:02:20.293: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:02:20.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6553" for this suite.
Jan  1 15:02:26.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:02:26.488: INFO: namespace statefulset-6553 deletion completed in 6.166683003s

• [SLOW TEST:123.115 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
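
The readiness flip this test leans on comes from moving index.html out of nginx's web root, which makes the pod's readiness probe fail and halts ordered scaling. A manual sketch against the same namespace, assuming the readiness probe reads /usr/share/nginx/html/index.html as the exec commands above imply:

kubectl exec -n statefulset-6553 ss-0 -- mv /usr/share/nginx/html/index.html /tmp/    # readiness probe starts failing
kubectl scale statefulset ss -n statefulset-6553 --replicas=3    # scale-up halts while ss-0 is unready
kubectl exec -n statefulset-6553 ss-0 -- mv /tmp/index.html /usr/share/nginx/html/    # restore; scale-up resumes in order
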
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:02:26.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan  1 15:02:26.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  1 15:02:26.792: INFO: stderr: ""
Jan  1 15:02:26.792: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:02:26.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7430" for this suite.
Jan  1 15:02:32.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:02:32.975: INFO: namespace kubectl-7430 deletion completed in 6.17750093s

• [SLOW TEST:6.486 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
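
The assertion in this test reduces to one pipeline; grep -x requires an exact whole-line match, so prefixed group versions such as apps/v1 do not count:

kubectl api-versions | grep -x v1    # exit status 0 only if the core v1 API is advertised
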
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:02:32.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  1 15:02:33.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7149'
Jan  1 15:02:33.502: INFO: stderr: ""
Jan  1 15:02:33.502: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  1 15:02:33.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7149'
Jan  1 15:02:33.696: INFO: stderr: ""
Jan  1 15:02:33.696: INFO: stdout: "update-demo-nautilus-jwgtl update-demo-nautilus-mrbrh "
Jan  1 15:02:33.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jwgtl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7149'
Jan  1 15:02:33.823: INFO: stderr: ""
Jan  1 15:02:33.824: INFO: stdout: ""
Jan  1 15:02:33.824: INFO: update-demo-nautilus-jwgtl is created but not running
Jan  1 15:02:38.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7149'
Jan  1 15:02:38.984: INFO: stderr: ""
Jan  1 15:02:38.985: INFO: stdout: "update-demo-nautilus-jwgtl update-demo-nautilus-mrbrh "
Jan  1 15:02:38.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jwgtl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7149'
Jan  1 15:02:40.083: INFO: stderr: ""
Jan  1 15:02:40.083: INFO: stdout: ""
Jan  1 15:02:40.083: INFO: update-demo-nautilus-jwgtl is created but not running
Jan  1 15:02:45.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7149'
Jan  1 15:02:45.281: INFO: stderr: ""
Jan  1 15:02:45.281: INFO: stdout: "update-demo-nautilus-jwgtl update-demo-nautilus-mrbrh "
Jan  1 15:02:45.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jwgtl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7149'
Jan  1 15:02:45.419: INFO: stderr: ""
Jan  1 15:02:45.419: INFO: stdout: "true"
Jan  1 15:02:45.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jwgtl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7149'
Jan  1 15:02:45.543: INFO: stderr: ""
Jan  1 15:02:45.544: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 15:02:45.544: INFO: validating pod update-demo-nautilus-jwgtl
Jan  1 15:02:45.565: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 15:02:45.565: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  1 15:02:45.565: INFO: update-demo-nautilus-jwgtl is verified up and running
Jan  1 15:02:45.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrbrh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7149'
Jan  1 15:02:45.684: INFO: stderr: ""
Jan  1 15:02:45.684: INFO: stdout: "true"
Jan  1 15:02:45.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mrbrh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7149'
Jan  1 15:02:45.831: INFO: stderr: ""
Jan  1 15:02:45.831: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  1 15:02:45.832: INFO: validating pod update-demo-nautilus-mrbrh
Jan  1 15:02:45.869: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  1 15:02:45.869: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  1 15:02:45.869: INFO: update-demo-nautilus-mrbrh is verified up and running
STEP: using delete to clean up resources
Jan  1 15:02:45.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7149'
Jan  1 15:02:46.064: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  1 15:02:46.064: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  1 15:02:46.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7149'
Jan  1 15:02:46.203: INFO: stderr: "No resources found.\n"
Jan  1 15:02:46.203: INFO: stdout: ""
Jan  1 15:02:46.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7149 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  1 15:02:46.451: INFO: stderr: ""
Jan  1 15:02:46.451: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:02:46.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7149" for this suite.
Jan  1 15:03:08.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:03:08.857: INFO: namespace kubectl-7149 deletion completed in 22.390376566s

• [SLOW TEST:35.881 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
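
The manifest the test feeds to 'kubectl create -f -' is not echoed in the log; a minimal sketch that matches the names, labels, and image seen above:

cat <<'EOF' | kubectl create -f - --namespace=kubectl-7149
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF
kubectl delete rc update-demo-nautilus --namespace=kubectl-7149 --grace-period=0 --force    # the forced teardown used above
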
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:03:08.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 15:03:08.965: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4" in namespace "downward-api-6437" to be "success or failure"
Jan  1 15:03:09.027: INFO: Pod "downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4": Phase="Pending", Reason="", readiness=false. Elapsed: 61.694401ms
Jan  1 15:03:11.055: INFO: Pod "downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089414651s
Jan  1 15:03:13.078: INFO: Pod "downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112435313s
Jan  1 15:03:15.090: INFO: Pod "downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124578549s
Jan  1 15:03:17.115: INFO: Pod "downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.149527092s
STEP: Saw pod success
Jan  1 15:03:17.115: INFO: Pod "downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4" satisfied condition "success or failure"
Jan  1 15:03:17.119: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4 container client-container: 
STEP: delete the pod
Jan  1 15:03:17.479: INFO: Waiting for pod downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4 to disappear
Jan  1 15:03:17.484: INFO: Pod downwardapi-volume-98a09855-d8e8-4a62-ab4d-b7d68a9319e4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:03:17.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6437" for this suite.
Jan  1 15:03:23.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:03:23.718: INFO: namespace downward-api-6437 deletion completed in 6.227747292s

• [SLOW TEST:14.858 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
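
A sketch of the kind of pod this test creates: a downwardAPI volume projects metadata.name into a file the container then reads. Pod name and image are assumptions, not from the log:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
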
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:03:23.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:03:32.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9804" for this suite.
Jan  1 15:03:55.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:03:55.430: INFO: namespace replication-controller-9804 deletion completed in 22.488462533s

• [SLOW TEST:31.712 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
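
The Given/When/Then steps above translate roughly to this sketch: a bare pod labeled name=pod-adoption, then an RC whose selector matches it, after which the pod gains an ownerReference (all names assumed):

kubectl run pod-adoption --image=k8s.gcr.io/pause:3.1 --restart=Never --labels=name=pod-adoption
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'    # prints ReplicationController once adopted
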
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:03:55.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  1 15:03:55.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1706'
Jan  1 15:03:56.134: INFO: stderr: ""
Jan  1 15:03:56.134: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  1 15:03:57.146: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:03:57.146: INFO: Found 0 / 1
Jan  1 15:03:58.145: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:03:58.145: INFO: Found 0 / 1
Jan  1 15:03:59.145: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:03:59.145: INFO: Found 0 / 1
Jan  1 15:04:00.143: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:04:00.143: INFO: Found 0 / 1
Jan  1 15:04:01.152: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:04:01.153: INFO: Found 0 / 1
Jan  1 15:04:02.144: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:04:02.144: INFO: Found 0 / 1
Jan  1 15:04:03.142: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:04:03.142: INFO: Found 0 / 1
Jan  1 15:04:04.143: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:04:04.143: INFO: Found 1 / 1
Jan  1 15:04:04.143: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  1 15:04:04.147: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:04:04.148: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  1 15:04:04.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-vwjb4 --namespace=kubectl-1706 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  1 15:04:04.299: INFO: stderr: ""
Jan  1 15:04:04.299: INFO: stdout: "pod/redis-master-vwjb4 patched\n"
STEP: checking annotations
Jan  1 15:04:04.329: INFO: Selector matched 1 pods for map[app:redis]
Jan  1 15:04:04.329: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:04:04.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1706" for this suite.
Jan  1 15:04:26.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:04:26.573: INFO: namespace kubectl-1706 deletion completed in 22.236487933s

• [SLOW TEST:31.142 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
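
The patch and the check it implies, reusing the exact payload from the log:

kubectl patch pod redis-master-vwjb4 --namespace=kubectl-1706 -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-vwjb4 --namespace=kubectl-1706 -o jsonpath='{.metadata.annotations.x}'    # prints: y
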
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:04:26.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-b0b5f082-2495-42ef-9aa3-3cdaa18b5e21 in namespace container-probe-7600
Jan  1 15:04:34.819: INFO: Started pod liveness-b0b5f082-2495-42ef-9aa3-3cdaa18b5e21 in namespace container-probe-7600
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 15:04:34.829: INFO: Initial restart count of pod liveness-b0b5f082-2495-42ef-9aa3-3cdaa18b5e21 is 0
Jan  1 15:04:57.004: INFO: Restart count of pod container-probe-7600/liveness-b0b5f082-2495-42ef-9aa3-3cdaa18b5e21 is now 1 (22.174949011s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:04:57.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7600" for this suite.
Jan  1 15:05:03.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:05:03.314: INFO: namespace container-probe-7600 deletion completed in 6.242331956s

• [SLOW TEST:36.740 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
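
A sketch of the liveness setup being exercised, assuming the e2e liveness image (which serves /healthz and starts returning errors shortly after startup); the pod name, port, and probe timings are assumptions:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
EOF
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'    # climbs past 0 once the probe fails
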
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:05:03.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-cf59c683-2b63-4a28-a452-6fbc3f224831
STEP: Creating a pod to test consume secrets
Jan  1 15:05:03.410: INFO: Waiting up to 5m0s for pod "pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338" in namespace "secrets-6639" to be "success or failure"
Jan  1 15:05:03.484: INFO: Pod "pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338": Phase="Pending", Reason="", readiness=false. Elapsed: 74.018057ms
Jan  1 15:05:05.493: INFO: Pod "pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08225231s
Jan  1 15:05:07.502: INFO: Pod "pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091289572s
Jan  1 15:05:09.516: INFO: Pod "pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105619224s
Jan  1 15:05:11.524: INFO: Pod "pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113448403s
STEP: Saw pod success
Jan  1 15:05:11.524: INFO: Pod "pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338" satisfied condition "success or failure"
Jan  1 15:05:11.528: INFO: Trying to get logs from node iruya-node pod pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338 container secret-volume-test: 
STEP: delete the pod
Jan  1 15:05:11.624: INFO: Waiting for pod pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338 to disappear
Jan  1 15:05:11.634: INFO: Pod pod-secrets-ed1e3e32-d765-46f6-abc6-13e75564d338 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:05:11.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6639" for this suite.
Jan  1 15:05:17.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:05:17.919: INFO: namespace secrets-6639 deletion completed in 6.277457509s

• [SLOW TEST:14.604 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
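
"With mappings" means each secret key is projected under a caller-chosen path instead of its key name. A sketch (secret name, key, and paths assumed):

kubectl create secret generic mapped-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: mapped-secret
      items:
      - key: data-1
        path: new-path-data-1    # key data-1 mapped to a renamed file
EOF
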
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:05:17.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  1 15:05:18.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8" in namespace "downward-api-607" to be "success or failure"
Jan  1 15:05:18.062: INFO: Pod "downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.36743ms
Jan  1 15:05:20.073: INFO: Pod "downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02202001s
Jan  1 15:05:22.079: INFO: Pod "downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028143691s
Jan  1 15:05:24.089: INFO: Pod "downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037923023s
Jan  1 15:05:26.102: INFO: Pod "downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8": Phase="Running", Reason="", readiness=true. Elapsed: 8.051187103s
Jan  1 15:05:28.111: INFO: Pod "downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059972095s
STEP: Saw pod success
Jan  1 15:05:28.111: INFO: Pod "downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8" satisfied condition "success or failure"
Jan  1 15:05:28.117: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8 container client-container: 
STEP: delete the pod
Jan  1 15:05:28.263: INFO: Waiting for pod downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8 to disappear
Jan  1 15:05:28.324: INFO: Pod downwardapi-volume-1b21def1-7a47-44bd-be97-135ff41504a8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:05:28.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-607" for this suite.
Jan  1 15:05:34.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:05:34.554: INFO: namespace downward-api-607 deletion completed in 6.217685912s

• [SLOW TEST:16.633 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
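
The per-item mode knob this test covers sits under the volume's items list; a sketch with assumed names and 0400 as an example mode:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]    # expect -r-------- for mode 0400
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400
        fieldRef:
          fieldPath: metadata.name
EOF
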
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:05:34.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-f0ef984c-b527-4221-9101-f56459292d34 in namespace container-probe-6292
Jan  1 15:05:42.783: INFO: Started pod liveness-f0ef984c-b527-4221-9101-f56459292d34 in namespace container-probe-6292
STEP: checking the pod's current state and verifying that restartCount is present
Jan  1 15:05:42.786: INFO: Initial restart count of pod liveness-f0ef984c-b527-4221-9101-f56459292d34 is 0
Jan  1 15:05:58.904: INFO: Restart count of pod container-probe-6292/liveness-f0ef984c-b527-4221-9101-f56459292d34 is now 1 (16.117525116s elapsed)
Jan  1 15:06:17.008: INFO: Restart count of pod container-probe-6292/liveness-f0ef984c-b527-4221-9101-f56459292d34 is now 2 (34.221515509s elapsed)
Jan  1 15:06:37.113: INFO: Restart count of pod container-probe-6292/liveness-f0ef984c-b527-4221-9101-f56459292d34 is now 3 (54.326968163s elapsed)
Jan  1 15:06:57.225: INFO: Restart count of pod container-probe-6292/liveness-f0ef984c-b527-4221-9101-f56459292d34 is now 4 (1m14.439150161s elapsed)
Jan  1 15:07:59.703: INFO: Restart count of pod container-probe-6292/liveness-f0ef984c-b527-4221-9101-f56459292d34 is now 5 (2m16.916643663s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:07:59.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6292" for this suite.
Jan  1 15:08:05.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:08:05.998: INFO: namespace container-probe-6292 deletion completed in 6.234094272s

• [SLOW TEST:151.440 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
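
The climbing restart counts above can be watched live for any such pod; this reuses the pod and namespace from the log:

kubectl get pod liveness-f0ef984c-b527-4221-9101-f56459292d34 -n container-probe-6292 -w \
  -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount
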
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:08:06.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  1 15:08:06.074: INFO: PodSpec: initContainers in spec.initContainers
Jan  1 15:09:10.795: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f801cd1a-0fba-43f4-b783-125e65e37de1", GenerateName:"", Namespace:"init-container-2985", SelfLink:"/api/v1/namespaces/init-container-2985/pods/pod-init-f801cd1a-0fba-43f4-b783-125e65e37de1", UID:"8d313276-dc42-49c2-b188-f1df7069f1a7", ResourceVersion:"18913178", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713488086, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"74320650"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nbmbj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00296b5c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbmbj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbmbj", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbmbj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002dadeb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0026f64e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002dadf40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002dadf60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002dadf68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002dadf6c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713488086, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713488086, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713488086, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713488086, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002860640), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009ea4d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0009ea540)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://97cdad9bd3514447533cd93cca26bae9ba910e04199a7ee4d7c5c2ab97e2c9bf"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002860680), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002860660), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:09:10.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2985" for this suite.
Jan  1 15:09:32.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:09:33.007: INFO: namespace init-container-2985 deletion completed in 22.148060955s

• [SLOW TEST:87.007 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
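
The dumped PodSpec above reduces to the following sketch: init1 always fails, so init2 and the app container run1 never start while the kubelet retries with backoff (pod name assumed; images and commands mirror the dump):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]    # fails every attempt
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod pod-init-demo -o jsonpath='{.status.initContainerStatuses[0].restartCount}'    # keeps growing; run1 stays Waiting
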
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:09:33.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  1 15:09:51.282: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  1 15:09:51.294: INFO: Pod pod-with-poststart-http-hook still exists
Jan  1 15:09:53.294: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  1 15:09:53.305: INFO: Pod pod-with-poststart-http-hook still exists
Jan  1 15:09:55.294: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  1 15:09:55.308: INFO: Pod pod-with-poststart-http-hook still exists
Jan  1 15:09:57.294: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  1 15:09:57.317: INFO: Pod pod-with-poststart-http-hook still exists
Jan  1 15:09:59.295: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  1 15:09:59.310: INFO: Pod pod-with-poststart-http-hook still exists
Jan  1 15:10:01.295: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  1 15:10:01.309: INFO: Pod pod-with-poststart-http-hook still exists
Jan  1 15:10:03.294: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  1 15:10:03.305: INFO: Pod pod-with-poststart-http-hook still exists
Jan  1 15:10:05.294: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  1 15:10:05.305: INFO: Pod pod-with-poststart-http-hook still exists
Jan  1 15:10:07.294: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  1 15:10:07.305: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:10:07.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7043" for this suite.
Jan  1 15:10:29.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:10:29.530: INFO: namespace container-lifecycle-hook-7043 deletion completed in 22.216366371s

• [SLOW TEST:56.523 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
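
A sketch of a postStart HTTP hook like the one exercised above; the hook target here is illustrative (the test aims it at the helper pod it created in the "create the container to handle the HTTPGet hook request" step):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        httpGet:
          host: 10.44.0.1    # assumed handler-pod IP; not from this test's log
          path: /echo?msg=poststart
          port: 8080
EOF
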
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  1 15:10:29.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan  1 15:10:29.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6381 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  1 15:10:42.405: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Jan  1 15:10:42.405: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  1 15:10:44.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6381" for this suite.
Jan  1 15:10:50.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  1 15:10:50.577: INFO: namespace kubectl-6381 deletion completed in 6.145399457s

• [SLOW TEST:21.045 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
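
The one-liner under test, reassembled from the Running line above; 'abcd1234' is the stdin the framework piped in:

echo abcd1234 | kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo stdin closed'
kubectl get job e2e-test-rm-busybox-job    # NotFound: --rm deleted the job after it finished
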
SSSSSSS
Jan  1 15:10:50.578: INFO: Running AfterSuite actions on all nodes
Jan  1 15:10:50.578: INFO: Running AfterSuite actions on node 1
Jan  1 15:10:50.578: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8026.852 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS