I0428 12:55:54.094581 6 e2e.go:243] Starting e2e run "a859a4f3-c485-4786-8b9e-28db8dedbdc9" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588078553 - Will randomize all specs
Will run 215 of 4412 specs

Apr 28 12:55:54.283: INFO: >>> kubeConfig: /root/.kube/config
Apr 28 12:55:54.288: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 28 12:55:54.308: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 28 12:55:54.339: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 28 12:55:54.339: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 28 12:55:54.340: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 28 12:55:54.349: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 28 12:55:54.349: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 28 12:55:54.349: INFO: e2e test version: v1.15.11
Apr 28 12:55:54.350: INFO: kube-apiserver version: v1.15.7
SSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 12:55:54.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Apr 28 12:55:54.432: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0428 12:56:24.989680 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 28 12:56:24.989: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 12:56:24.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7028" for this suite.
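Editor's note: the "delete the deployment" step above issues the DELETE with deleteOptions.PropagationPolicy set to Orphan, which tells the garbage collector to leave the Deployment's ReplicaSet in place. A minimal sketch of that request body (not part of the log; reconstructed from the standard meta/v1 DeleteOptions fields):

```yaml
# DeleteOptions body sent with the DELETE on the Deployment.
# propagationPolicy: Orphan leaves dependents (the ReplicaSet) behind,
# which is exactly what this conformance test then verifies for 30 seconds.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```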
Apr 28 12:56:33.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 12:56:33.091: INFO: namespace gc-7028 deletion completed in 8.098888849s

• [SLOW TEST:38.740 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 12:56:33.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 28 12:56:33.157: INFO: Waiting up to 5m0s for pod "pod-47402396-9878-4409-9b32-50da17289db3" in namespace "emptydir-296" to be "success or failure"
Apr 28 12:56:33.163: INFO: Pod "pod-47402396-9878-4409-9b32-50da17289db3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.732553ms
Apr 28 12:56:35.191: INFO: Pod "pod-47402396-9878-4409-9b32-50da17289db3": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.033880611s
Apr 28 12:56:37.196: INFO: Pod "pod-47402396-9878-4409-9b32-50da17289db3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038349962s
STEP: Saw pod success
Apr 28 12:56:37.196: INFO: Pod "pod-47402396-9878-4409-9b32-50da17289db3" satisfied condition "success or failure"
Apr 28 12:56:37.199: INFO: Trying to get logs from node iruya-worker2 pod pod-47402396-9878-4409-9b32-50da17289db3 container test-container:
STEP: delete the pod
Apr 28 12:56:37.250: INFO: Waiting for pod pod-47402396-9878-4409-9b32-50da17289db3 to disappear
Apr 28 12:56:37.265: INFO: Pod pod-47402396-9878-4409-9b32-50da17289db3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 12:56:37.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-296" for this suite.
Apr 28 12:56:43.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 12:56:43.391: INFO: namespace emptydir-296 deletion completed in 6.122544747s

• [SLOW TEST:10.300 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 12:56:43.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename
deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 28 12:56:43.470: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 28 12:56:48.475: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 28 12:56:48.475: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 28 12:56:48.543: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7079,SelfLink:/apis/apps/v1/namespaces/deployment-7079/deployments/test-cleanup-deployment,UID:6d193a8d-6f96-4bf3-bcdd-a96ec58535de,ResourceVersion:7891831,Generation:1,CreationTimestamp:2020-04-28 12:56:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 28 12:56:48.556: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7079,SelfLink:/apis/apps/v1/namespaces/deployment-7079/replicasets/test-cleanup-deployment-55bbcbc84c,UID:4a7bd5bb-1f6a-484b-98c6-2cc828b020f1,ResourceVersion:7891833,Generation:1,CreationTimestamp:2020-04-28 12:56:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6d193a8d-6f96-4bf3-bcdd-a96ec58535de 0xc002ec58b7 0xc002ec58b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 12:56:48.556: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 28 12:56:48.557: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7079,SelfLink:/apis/apps/v1/namespaces/deployment-7079/replicasets/test-cleanup-controller,UID:768d5e7e-4948-4c66-beb5-0cdfe6258dc6,ResourceVersion:7891832,Generation:1,CreationTimestamp:2020-04-28 12:56:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6d193a8d-6f96-4bf3-bcdd-a96ec58535de 0xc002ec5747 0xc002ec5748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 28 12:56:48.714: INFO: Pod "test-cleanup-controller-4wbzn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-4wbzn,GenerateName:test-cleanup-controller-,Namespace:deployment-7079,SelfLink:/api/v1/namespaces/deployment-7079/pods/test-cleanup-controller-4wbzn,UID:47516ed3-c68b-46ca-98ab-f68ab91bcb0f,ResourceVersion:7891826,Generation:0,CreationTimestamp:2020-04-28 12:56:43 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 768d5e7e-4948-4c66-beb5-0cdfe6258dc6 0xc00274c4d7 0xc00274c4d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s2pqr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s2pqr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-s2pqr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00274c550} {node.kubernetes.io/unreachable Exists NoExecute 0xc00274c570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:56:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:56:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:56:46 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:56:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.224,StartTime:2020-04-28 12:56:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 12:56:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4bf2ea72fc7d4b86bf4be37deec7b6e3e278f3ef19893e43d720d2aaf993354f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 12:56:48.714: INFO: Pod "test-cleanup-deployment-55bbcbc84c-h4fzj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-h4fzj,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7079,SelfLink:/api/v1/namespaces/deployment-7079/pods/test-cleanup-deployment-55bbcbc84c-h4fzj,UID:cfc84624-2f06-49e2-b370-ff45f3734d44,ResourceVersion:7891837,Generation:0,CreationTimestamp:2020-04-28 12:56:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 4a7bd5bb-1f6a-484b-98c6-2cc828b020f1 0xc00274c657 0xc00274c658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s2pqr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s2pqr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-s2pqr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00274c6d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00274c6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 12:56:48 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 12:56:48.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7079" for this suite. 
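Editor's note: the Deployment dumped above sets RevisionHistoryLimit to 0, which is what makes the controller delete old ReplicaSets as soon as they are scaled down. An approximate manifest reconstructed from that dump (a sketch, not part of the log):

```yaml
# Approximation of "test-cleanup-deployment" from the object dump above.
# revisionHistoryLimit: 0 tells the deployment controller to keep no old
# ReplicaSets, which is the behavior this conformance test verifies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```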
Apr 28 12:56:54.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 12:56:54.860: INFO: namespace deployment-7079 deletion completed in 6.119192927s

• [SLOW TEST:11.469 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 12:56:54.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 28 12:56:58.985: INFO: Waiting up to 5m0s for pod "client-envvars-e5f0aa52-ff69-4963-8b69-7092c1d0a546" in namespace "pods-4715" to be "success or failure"
Apr 28 12:56:59.036: INFO: Pod "client-envvars-e5f0aa52-ff69-4963-8b69-7092c1d0a546": Phase="Pending", Reason="", readiness=false. Elapsed: 51.465032ms
Apr 28 12:57:01.041: INFO: Pod "client-envvars-e5f0aa52-ff69-4963-8b69-7092c1d0a546": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.056313056s
Apr 28 12:57:03.046: INFO: Pod "client-envvars-e5f0aa52-ff69-4963-8b69-7092c1d0a546": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061039495s
STEP: Saw pod success
Apr 28 12:57:03.046: INFO: Pod "client-envvars-e5f0aa52-ff69-4963-8b69-7092c1d0a546" satisfied condition "success or failure"
Apr 28 12:57:03.050: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-e5f0aa52-ff69-4963-8b69-7092c1d0a546 container env3cont:
STEP: delete the pod
Apr 28 12:57:03.076: INFO: Waiting for pod client-envvars-e5f0aa52-ff69-4963-8b69-7092c1d0a546 to disappear
Apr 28 12:57:03.080: INFO: Pod client-envvars-e5f0aa52-ff69-4963-8b69-7092c1d0a546 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 12:57:03.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4715" for this suite.
Apr 28 12:57:41.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 12:57:41.171: INFO: namespace pods-4715 deletion completed in 38.086611474s

• [SLOW TEST:46.310 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 12:57:41.171: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 28 12:57:41.265: INFO: Waiting up to 5m0s for pod "downward-api-63a83394-362d-4832-af7e-710f0bfa3dfa" in namespace "downward-api-9510" to be "success or failure"
Apr 28 12:57:41.268: INFO: Pod "downward-api-63a83394-362d-4832-af7e-710f0bfa3dfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.69522ms
Apr 28 12:57:43.272: INFO: Pod "downward-api-63a83394-362d-4832-af7e-710f0bfa3dfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006899313s
Apr 28 12:57:45.276: INFO: Pod "downward-api-63a83394-362d-4832-af7e-710f0bfa3dfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011042102s
STEP: Saw pod success
Apr 28 12:57:45.276: INFO: Pod "downward-api-63a83394-362d-4832-af7e-710f0bfa3dfa" satisfied condition "success or failure"
Apr 28 12:57:45.280: INFO: Trying to get logs from node iruya-worker pod downward-api-63a83394-362d-4832-af7e-710f0bfa3dfa container dapi-container:
STEP: delete the pod
Apr 28 12:57:45.347: INFO: Waiting for pod downward-api-63a83394-362d-4832-af7e-710f0bfa3dfa to disappear
Apr 28 12:57:45.351: INFO: Pod downward-api-63a83394-362d-4832-af7e-710f0bfa3dfa no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 12:57:45.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9510" for this suite.
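Editor's note: the downward-API pod above exposes pod metadata through env-var fieldRefs. A sketch of the kind of pod this test creates (names are illustrative; the fieldRef paths are the standard downward-API fields the test verifies):

```yaml
# Illustrative downward-API pod: each env var is populated by the kubelet
# from the pod's own metadata/status rather than from a literal value.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```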
Apr 28 12:57:51.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 12:57:51.464: INFO: namespace downward-api-9510 deletion completed in 6.109794656s

• [SLOW TEST:10.293 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 12:57:51.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 12:57:55.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1925" for this suite.
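Editor's note: "should print the output to logs" checks that whatever a container writes to stdout is captured by the kubelet as the container log. An illustrative pod of that shape (all names are assumptions, not from the log):

```yaml
# A busybox container echoes to stdout; the kubelet records it, and the
# test reads it back via the pod's logs.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-example
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the pod'"]
```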
Apr 28 12:58:33.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 12:58:33.719: INFO: namespace kubelet-test-1925 deletion completed in 38.145629059s

• [SLOW TEST:42.255 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 12:58:33.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Apr 28 12:58:39.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-4e2ff5da-aa8d-4f7f-911c-cdaf7cbea870 -c busybox-main-container --namespace=emptydir-9439 -- cat /usr/share/volumeshare/shareddata.txt'
Apr 28 12:58:42.237: INFO: stderr: "I0428 12:58:42.125577 35 log.go:172] (0xc000116e70) (0xc00078caa0) Create stream\nI0428 12:58:42.125635 35 log.go:172]
(0xc000116e70) (0xc00078caa0) Stream added, broadcasting: 1\nI0428 12:58:42.127840 35 log.go:172] (0xc000116e70) Reply frame received for 1\nI0428 12:58:42.127894 35 log.go:172] (0xc000116e70) (0xc000774000) Create stream\nI0428 12:58:42.127920 35 log.go:172] (0xc000116e70) (0xc000774000) Stream added, broadcasting: 3\nI0428 12:58:42.128837 35 log.go:172] (0xc000116e70) Reply frame received for 3\nI0428 12:58:42.128882 35 log.go:172] (0xc000116e70) (0xc000978000) Create stream\nI0428 12:58:42.128899 35 log.go:172] (0xc000116e70) (0xc000978000) Stream added, broadcasting: 5\nI0428 12:58:42.129794 35 log.go:172] (0xc000116e70) Reply frame received for 5\nI0428 12:58:42.229006 35 log.go:172] (0xc000116e70) Data frame received for 5\nI0428 12:58:42.229064 35 log.go:172] (0xc000978000) (5) Data frame handling\nI0428 12:58:42.229105 35 log.go:172] (0xc000116e70) Data frame received for 3\nI0428 12:58:42.229285 35 log.go:172] (0xc000774000) (3) Data frame handling\nI0428 12:58:42.229316 35 log.go:172] (0xc000774000) (3) Data frame sent\nI0428 12:58:42.229332 35 log.go:172] (0xc000116e70) Data frame received for 3\nI0428 12:58:42.229346 35 log.go:172] (0xc000774000) (3) Data frame handling\nI0428 12:58:42.230630 35 log.go:172] (0xc000116e70) Data frame received for 1\nI0428 12:58:42.230653 35 log.go:172] (0xc00078caa0) (1) Data frame handling\nI0428 12:58:42.230664 35 log.go:172] (0xc00078caa0) (1) Data frame sent\nI0428 12:58:42.230676 35 log.go:172] (0xc000116e70) (0xc00078caa0) Stream removed, broadcasting: 1\nI0428 12:58:42.230695 35 log.go:172] (0xc000116e70) Go away received\nI0428 12:58:42.231232 35 log.go:172] (0xc000116e70) (0xc00078caa0) Stream removed, broadcasting: 1\nI0428 12:58:42.231264 35 log.go:172] (0xc000116e70) (0xc000774000) Stream removed, broadcasting: 3\nI0428 12:58:42.231277 35 log.go:172] (0xc000116e70) (0xc000978000) Stream removed, broadcasting: 5\n" Apr 28 12:58:42.237: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] 
[sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 12:58:42.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9439" for this suite. Apr 28 12:58:48.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:58:48.343: INFO: namespace emptydir-9439 deletion completed in 6.101674906s • [SLOW TEST:14.623 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 12:58:48.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-ea1e26d0-5c77-4185-844e-467648e696ea [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 12:58:48.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1208" for this suite. 
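Annotation: the EmptyDir test above had one container write `/usr/share/volumeshare/shareddata.txt` into a shared `emptyDir` volume and another container read it back via `kubectl exec`. As a rough local analogy of that shared-scratch-volume behavior (not the real e2e test, which runs two containers in one pod), two functions standing in for the containers share a temporary directory:

```python
import os
import tempfile

def run_shared_volume_demo() -> str:
    """Local sketch of emptyDir sharing: one 'container' writes a file into a
    shared scratch directory, the other reads the same path back."""
    share = tempfile.mkdtemp(prefix="volumeshare-")
    path = os.path.join(share, "shareddata.txt")
    # "sub-container": writes into the shared mount
    with open(path, "w") as f:
        f.write("Hello from the busy-box sub-container\n")
    # "main container": reads the identical content through the shared mount
    with open(path) as f:
        return f.read()

print(run_shared_volume_demo())
```

The pod in the log does the same thing with a real `emptyDir` volume mounted into both containers; the test passes because the main container's `cat` returns exactly the bytes the sub-container wrote.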
Apr 28 12:58:54.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:58:54.550: INFO: namespace configmap-1208 deletion completed in 6.1312575s • [SLOW TEST:6.206 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 12:58:54.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
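Annotation: the ConfigMap test above succeeds because the apiserver rejects a ConfigMap whose `data` map contains an empty key. A minimal sketch of that key rule, assuming it mirrors Kubernetes' `IsConfigMapKey` validation (alphanumerics plus `-`, `_`, `.`, non-empty, at most 253 characters; the constant and function names here are illustrative):

```python
import re

# Assumed shape of the apiserver's ConfigMap data-key rule: one or more
# characters from [-._a-zA-Z0-9], capped at 253 characters.
CONFIGMAP_KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_configmap_key(key: str) -> bool:
    """Return True if `key` would be accepted as a ConfigMap data key."""
    return len(key) <= 253 and bool(CONFIGMAP_KEY_RE.match(key))

# The empty string fails the "one or more characters" requirement, which is
# why the create call in the test is rejected before the AfterEach runs.
print(is_valid_configmap_key("app.properties"), is_valid_configmap_key(""))
```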
Apr 28 12:58:54.664: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:58:54.684: INFO: Number of nodes with available pods: 0 Apr 28 12:58:54.684: INFO: Node iruya-worker is running more than one daemon pod Apr 28 12:58:55.688: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:58:55.692: INFO: Number of nodes with available pods: 0 Apr 28 12:58:55.692: INFO: Node iruya-worker is running more than one daemon pod Apr 28 12:58:56.689: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:58:56.693: INFO: Number of nodes with available pods: 0 Apr 28 12:58:56.693: INFO: Node iruya-worker is running more than one daemon pod Apr 28 12:58:57.691: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:58:57.694: INFO: Number of nodes with available pods: 0 Apr 28 12:58:57.694: INFO: Node iruya-worker is running more than one daemon pod Apr 28 12:58:58.689: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:58:58.693: INFO: Number of nodes with available pods: 2 Apr 28 12:58:58.693: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 28 12:58:58.713: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 12:58:58.719: INFO: Number of nodes with available pods: 2 Apr 28 12:58:58.719: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8314, will wait for the garbage collector to delete the pods Apr 28 12:58:59.891: INFO: Deleting DaemonSet.extensions daemon-set took: 61.573172ms Apr 28 12:59:00.191: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.298643ms Apr 28 12:59:03.014: INFO: Number of nodes with available pods: 0 Apr 28 12:59:03.014: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 12:59:03.020: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8314/daemonsets","resourceVersion":"7892316"},"items":null} Apr 28 12:59:03.023: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8314/pods","resourceVersion":"7892316"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 12:59:03.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8314" for this suite. 
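Annotation: the repeated "DaemonSet pods can't tolerate node iruya-control-plane with taints" lines above come from the test skipping nodes whose `NoSchedule` taints the DaemonSet pod does not tolerate. A simplified local model of that filter (the data shapes and helper names are illustrative, not the real scheduler API):

```python
def tolerates(taint: dict, tolerations: list) -> bool:
    """A taint is tolerated if some toleration matches its key and effect
    (an unset toleration effect matches any effect)."""
    return any(
        t.get("key") == taint["key"] and t.get("effect") in (None, taint["effect"])
        for t in tolerations
    )

def schedulable_nodes(nodes: list, tolerations: list) -> list:
    """Nodes where every taint is tolerated; untainted nodes always qualify."""
    return [
        n["name"]
        for n in nodes
        if all(tolerates(t, tolerations) for t in n.get("taints", []))
    ]

nodes = [
    {"name": "iruya-control-plane",
     "taints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]},
    {"name": "iruya-worker", "taints": []},
    {"name": "iruya-worker2", "taints": []},
]

# With no tolerations, only the two untainted workers remain, matching the
# log's "Number of running nodes: 2".
print(schedulable_nodes(nodes, []))
```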
Apr 28 12:59:09.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:59:09.124: INFO: namespace daemonsets-8314 deletion completed in 6.087975934s • [SLOW TEST:14.574 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 12:59:09.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 12:59:09.182: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87821884-bd4d-4605-817d-3365f6100b84" in namespace "projected-9622" to be "success or failure" Apr 28 12:59:09.198: INFO: Pod "downwardapi-volume-87821884-bd4d-4605-817d-3365f6100b84": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.489472ms Apr 28 12:59:11.202: INFO: Pod "downwardapi-volume-87821884-bd4d-4605-817d-3365f6100b84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020567638s Apr 28 12:59:13.207: INFO: Pod "downwardapi-volume-87821884-bd4d-4605-817d-3365f6100b84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025149102s STEP: Saw pod success Apr 28 12:59:13.207: INFO: Pod "downwardapi-volume-87821884-bd4d-4605-817d-3365f6100b84" satisfied condition "success or failure" Apr 28 12:59:13.211: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-87821884-bd4d-4605-817d-3365f6100b84 container client-container: STEP: delete the pod Apr 28 12:59:13.271: INFO: Waiting for pod downwardapi-volume-87821884-bd4d-4605-817d-3365f6100b84 to disappear Apr 28 12:59:13.292: INFO: Pod downwardapi-volume-87821884-bd4d-4605-817d-3365f6100b84 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 12:59:13.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9622" for this suite. 
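Annotation: the DefaultMode test above asserts that files projected into a downward API volume get mode `0644` when no explicit mode is set. A local illustration of the file-permission claim being checked (this just creates a file, applies the default mode, and reads the mode back; it does not involve a kubelet):

```python
import os
import stat
import tempfile

DEFAULT_MODE = 0o644  # the default the test expects for projected files

def project_file(data: bytes) -> int:
    """Write data to a temp file, apply the default mode, return the mode."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.chmod(path, DEFAULT_MODE)
    return stat.S_IMODE(os.stat(path).st_mode)

print(oct(project_file(b"podname")))
```

In the real test, the pod's container stats the projected file and the framework compares the observed mode against `0644`.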
Apr 28 12:59:19.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:59:19.412: INFO: namespace projected-9622 deletion completed in 6.116509586s • [SLOW TEST:10.288 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 12:59:19.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-879358fa-6ce0-45c0-b5f8-f8f31d845fd6 STEP: Creating secret with name s-test-opt-upd-7cf90aca-17bb-4d83-b044-ade58479a892 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-879358fa-6ce0-45c0-b5f8-f8f31d845fd6 STEP: Updating secret s-test-opt-upd-7cf90aca-17bb-4d83-b044-ade58479a892 STEP: Creating secret with name s-test-opt-create-c32df541-9d04-432b-b6f5-661540eaaa82 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 
12:59:29.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-234" for this suite. Apr 28 12:59:49.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:59:49.754: INFO: namespace secrets-234 deletion completed in 20.096731409s • [SLOW TEST:30.341 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 12:59:49.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 28 12:59:49.885: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7067,SelfLink:/api/v1/namespaces/watch-7067/configmaps/e2e-watch-test-resource-version,UID:b097602f-ff13-434e-95e9-8d00d5600cfb,ResourceVersion:7892503,Generation:0,CreationTimestamp:2020-04-28 12:59:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 28 12:59:49.885: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7067,SelfLink:/api/v1/namespaces/watch-7067/configmaps/e2e-watch-test-resource-version,UID:b097602f-ff13-434e-95e9-8d00d5600cfb,ResourceVersion:7892504,Generation:0,CreationTimestamp:2020-04-28 12:59:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 12:59:49.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7067" for this suite. 
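Annotation: the Watchers test above starts a watch at the resource version returned by the first update and then observes only the later MODIFIED (resourceVersion 7892503) and DELETED (7892504) events. A purely local model of those semantics (not the client-go watch API): a watch opened at version N delivers, in order, only events newer than N.

```python
def watch_from(events: list, resource_version: int) -> list:
    """Replay events strictly newer than the given resource version."""
    return [(e["type"], e["rv"]) for e in events if e["rv"] > resource_version]

events = [
    {"type": "ADDED",    "rv": 7892501},
    {"type": "MODIFIED", "rv": 7892502},  # first update; the watch starts here
    {"type": "MODIFIED", "rv": 7892503},  # second update
    {"type": "DELETED",  "rv": 7892504},
]

# Watching from the first update's version yields only the later
# MODIFIED and DELETED notifications, matching the log above.
print(watch_from(events, 7892502))
```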
Apr 28 12:59:55.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 12:59:56.007: INFO: namespace watch-7067 deletion completed in 6.118179743s • [SLOW TEST:6.252 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 12:59:56.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-d24e0825-e02b-4240-aede-137d17a252dc STEP: Creating a pod to test consume configMaps Apr 28 12:59:56.122: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-499d7b8e-2d5d-40d4-b914-00216348fa9a" in namespace "projected-7240" to be "success or failure" Apr 28 12:59:56.125: INFO: Pod "pod-projected-configmaps-499d7b8e-2d5d-40d4-b914-00216348fa9a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.539849ms Apr 28 12:59:58.198: INFO: Pod "pod-projected-configmaps-499d7b8e-2d5d-40d4-b914-00216348fa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076137948s Apr 28 13:00:00.296: INFO: Pod "pod-projected-configmaps-499d7b8e-2d5d-40d4-b914-00216348fa9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174671869s STEP: Saw pod success Apr 28 13:00:00.297: INFO: Pod "pod-projected-configmaps-499d7b8e-2d5d-40d4-b914-00216348fa9a" satisfied condition "success or failure" Apr 28 13:00:00.300: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-499d7b8e-2d5d-40d4-b914-00216348fa9a container projected-configmap-volume-test: STEP: delete the pod Apr 28 13:00:00.349: INFO: Waiting for pod pod-projected-configmaps-499d7b8e-2d5d-40d4-b914-00216348fa9a to disappear Apr 28 13:00:00.378: INFO: Pod pod-projected-configmaps-499d7b8e-2d5d-40d4-b914-00216348fa9a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:00:00.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7240" for this suite. 
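Annotation: the repeated "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above are the framework polling the pod's phase until it reaches a terminal state or the timeout elapses. A sketch of that wait loop, assuming the phase strings `Pending`/`Succeeded`/`Failed` from the log; `get_phase` stands in for a real API call:

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep) -> str:
    """Poll get_phase() until it reports a terminal pod phase or we time out."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated pod that is Pending on the first two polls, then Succeeded --
# the same Pending/Pending/Succeeded progression the log records.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), interval=0,
                                 sleep=lambda s: None)
print(result)
```

Injecting `clock` and `sleep` keeps the sketch testable without real delays; the e2e framework's version additionally logs the elapsed time on each poll, as seen above.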
Apr 28 13:00:06.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:00:06.475: INFO: namespace projected-7240 deletion completed in 6.093236608s • [SLOW TEST:10.468 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:00:06.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2268 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 28 13:00:06.581: INFO: Found 0 stateful pods, waiting for 3 Apr 28 13:00:16.608: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, 
currently Running - Ready=true Apr 28 13:00:16.608: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:00:16.608: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 28 13:00:26.586: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:00:26.586: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:00:26.586: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:00:26.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2268 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 13:00:26.883: INFO: stderr: "I0428 13:00:26.734099 66 log.go:172] (0xc0009de420) (0xc0003e2820) Create stream\nI0428 13:00:26.734158 66 log.go:172] (0xc0009de420) (0xc0003e2820) Stream added, broadcasting: 1\nI0428 13:00:26.737299 66 log.go:172] (0xc0009de420) Reply frame received for 1\nI0428 13:00:26.737343 66 log.go:172] (0xc0009de420) (0xc0003e2000) Create stream\nI0428 13:00:26.737354 66 log.go:172] (0xc0009de420) (0xc0003e2000) Stream added, broadcasting: 3\nI0428 13:00:26.738205 66 log.go:172] (0xc0009de420) Reply frame received for 3\nI0428 13:00:26.738242 66 log.go:172] (0xc0009de420) (0xc0003141e0) Create stream\nI0428 13:00:26.738255 66 log.go:172] (0xc0009de420) (0xc0003141e0) Stream added, broadcasting: 5\nI0428 13:00:26.739257 66 log.go:172] (0xc0009de420) Reply frame received for 5\nI0428 13:00:26.824927 66 log.go:172] (0xc0009de420) Data frame received for 5\nI0428 13:00:26.824966 66 log.go:172] (0xc0003141e0) (5) Data frame handling\nI0428 13:00:26.824996 66 log.go:172] (0xc0003141e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0428 13:00:26.875322 66 log.go:172] (0xc0009de420) Data frame received for 3\nI0428 13:00:26.875372 66 log.go:172] 
(0xc0003e2000) (3) Data frame handling\nI0428 13:00:26.875397 66 log.go:172] (0xc0003e2000) (3) Data frame sent\nI0428 13:00:26.875551 66 log.go:172] (0xc0009de420) Data frame received for 3\nI0428 13:00:26.875571 66 log.go:172] (0xc0003e2000) (3) Data frame handling\nI0428 13:00:26.875611 66 log.go:172] (0xc0009de420) Data frame received for 5\nI0428 13:00:26.875627 66 log.go:172] (0xc0003141e0) (5) Data frame handling\nI0428 13:00:26.877351 66 log.go:172] (0xc0009de420) Data frame received for 1\nI0428 13:00:26.877457 66 log.go:172] (0xc0003e2820) (1) Data frame handling\nI0428 13:00:26.877472 66 log.go:172] (0xc0003e2820) (1) Data frame sent\nI0428 13:00:26.877686 66 log.go:172] (0xc0009de420) (0xc0003e2820) Stream removed, broadcasting: 1\nI0428 13:00:26.877718 66 log.go:172] (0xc0009de420) Go away received\nI0428 13:00:26.877990 66 log.go:172] (0xc0009de420) (0xc0003e2820) Stream removed, broadcasting: 1\nI0428 13:00:26.878005 66 log.go:172] (0xc0009de420) (0xc0003e2000) Stream removed, broadcasting: 3\nI0428 13:00:26.878011 66 log.go:172] (0xc0009de420) (0xc0003141e0) Stream removed, broadcasting: 5\n" Apr 28 13:00:26.883: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 13:00:26.883: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 28 13:00:36.964: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 28 13:00:47.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2268 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 13:00:47.283: INFO: stderr: "I0428 13:00:47.163330 87 log.go:172] (0xc00044c630) (0xc0005f6aa0) Create stream\nI0428 13:00:47.163385 87 log.go:172] 
(0xc00044c630) (0xc0005f6aa0) Stream added, broadcasting: 1\nI0428 13:00:47.166510 87 log.go:172] (0xc00044c630) Reply frame received for 1\nI0428 13:00:47.166709 87 log.go:172] (0xc00044c630) (0xc000722000) Create stream\nI0428 13:00:47.166813 87 log.go:172] (0xc00044c630) (0xc000722000) Stream added, broadcasting: 3\nI0428 13:00:47.167941 87 log.go:172] (0xc00044c630) Reply frame received for 3\nI0428 13:00:47.167987 87 log.go:172] (0xc00044c630) (0xc0005f61e0) Create stream\nI0428 13:00:47.168004 87 log.go:172] (0xc00044c630) (0xc0005f61e0) Stream added, broadcasting: 5\nI0428 13:00:47.168889 87 log.go:172] (0xc00044c630) Reply frame received for 5\nI0428 13:00:47.275294 87 log.go:172] (0xc00044c630) Data frame received for 5\nI0428 13:00:47.275323 87 log.go:172] (0xc0005f61e0) (5) Data frame handling\nI0428 13:00:47.275335 87 log.go:172] (0xc0005f61e0) (5) Data frame sent\nI0428 13:00:47.275342 87 log.go:172] (0xc00044c630) Data frame received for 5\nI0428 13:00:47.275346 87 log.go:172] (0xc0005f61e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0428 13:00:47.275399 87 log.go:172] (0xc00044c630) Data frame received for 3\nI0428 13:00:47.275441 87 log.go:172] (0xc000722000) (3) Data frame handling\nI0428 13:00:47.275457 87 log.go:172] (0xc000722000) (3) Data frame sent\nI0428 13:00:47.275489 87 log.go:172] (0xc00044c630) Data frame received for 3\nI0428 13:00:47.275519 87 log.go:172] (0xc000722000) (3) Data frame handling\nI0428 13:00:47.277969 87 log.go:172] (0xc00044c630) Data frame received for 1\nI0428 13:00:47.277992 87 log.go:172] (0xc0005f6aa0) (1) Data frame handling\nI0428 13:00:47.278004 87 log.go:172] (0xc0005f6aa0) (1) Data frame sent\nI0428 13:00:47.278024 87 log.go:172] (0xc00044c630) (0xc0005f6aa0) Stream removed, broadcasting: 1\nI0428 13:00:47.278045 87 log.go:172] (0xc00044c630) Go away received\nI0428 13:00:47.278523 87 log.go:172] (0xc00044c630) (0xc0005f6aa0) Stream removed, broadcasting: 1\nI0428 13:00:47.278542 
87 log.go:172] (0xc00044c630) (0xc000722000) Stream removed, broadcasting: 3\nI0428 13:00:47.278551 87 log.go:172] (0xc00044c630) (0xc0005f61e0) Stream removed, broadcasting: 5\n" Apr 28 13:00:47.283: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 13:00:47.283: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 13:01:07.304: INFO: Waiting for StatefulSet statefulset-2268/ss2 to complete update Apr 28 13:01:07.304: INFO: Waiting for Pod statefulset-2268/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Apr 28 13:01:17.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2268 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 13:01:17.564: INFO: stderr: "I0428 13:01:17.453764 107 log.go:172] (0xc000920420) (0xc00042c6e0) Create stream\nI0428 13:01:17.453825 107 log.go:172] (0xc000920420) (0xc00042c6e0) Stream added, broadcasting: 1\nI0428 13:01:17.455948 107 log.go:172] (0xc000920420) Reply frame received for 1\nI0428 13:01:17.455999 107 log.go:172] (0xc000920420) (0xc00085c000) Create stream\nI0428 13:01:17.456017 107 log.go:172] (0xc000920420) (0xc00085c000) Stream added, broadcasting: 3\nI0428 13:01:17.456999 107 log.go:172] (0xc000920420) Reply frame received for 3\nI0428 13:01:17.457035 107 log.go:172] (0xc000920420) (0xc000922000) Create stream\nI0428 13:01:17.457048 107 log.go:172] (0xc000920420) (0xc000922000) Stream added, broadcasting: 5\nI0428 13:01:17.458117 107 log.go:172] (0xc000920420) Reply frame received for 5\nI0428 13:01:17.522194 107 log.go:172] (0xc000920420) Data frame received for 5\nI0428 13:01:17.522229 107 log.go:172] (0xc000922000) (5) Data frame handling\nI0428 13:01:17.522248 107 log.go:172] (0xc000922000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html 
/tmp/
I0428 13:01:17.554459 107 log.go:172] (0xc000920420) Data frame received for 3
I0428 13:01:17.554501 107 log.go:172] (0xc00085c000) (3) Data frame handling
I0428 13:01:17.554544 107 log.go:172] (0xc00085c000) (3) Data frame sent
I0428 13:01:17.554630 107 log.go:172] (0xc000920420) Data frame received for 3
I0428 13:01:17.554659 107 log.go:172] (0xc00085c000) (3) Data frame handling
I0428 13:01:17.554976 107 log.go:172] (0xc000920420) Data frame received for 5
I0428 13:01:17.555017 107 log.go:172] (0xc000922000) (5) Data frame handling
I0428 13:01:17.556676 107 log.go:172] (0xc000920420) Data frame received for 1
I0428 13:01:17.556704 107 log.go:172] (0xc00042c6e0) (1) Data frame handling
I0428 13:01:17.556765 107 log.go:172] (0xc00042c6e0) (1) Data frame sent
I0428 13:01:17.556810 107 log.go:172] (0xc000920420) (0xc00042c6e0) Stream removed, broadcasting: 1
I0428 13:01:17.556831 107 log.go:172] (0xc000920420) Go away received
I0428 13:01:17.559515 107 log.go:172] (0xc000920420) (0xc00042c6e0) Stream removed, broadcasting: 1
I0428 13:01:17.559551 107 log.go:172] (0xc000920420) (0xc00085c000) Stream removed, broadcasting: 3
I0428 13:01:17.559565 107 log.go:172] (0xc000920420) (0xc000922000) Stream removed, broadcasting: 5"
Apr 28 13:01:17.564: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 28 13:01:17.564: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 28 13:01:27.596: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Apr 28 13:01:37.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2268 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 28 13:01:37.880: INFO: stderr: "I0428 13:01:37.795259 127 log.go:172] (0xc00061c420) (0xc0007fa640) Create stream
I0428 13:01:37.795321 127 log.go:172] (0xc00061c420) (0xc0007fa640) Stream added, broadcasting: 1
I0428 13:01:37.798920 127 log.go:172] (0xc00061c420) Reply frame received for 1
I0428 13:01:37.798965 127 log.go:172] (0xc00061c420) (0xc0005e2280) Create stream
I0428 13:01:37.798979 127 log.go:172] (0xc00061c420) (0xc0005e2280) Stream added, broadcasting: 3
I0428 13:01:37.800069 127 log.go:172] (0xc00061c420) Reply frame received for 3
I0428 13:01:37.800134 127 log.go:172] (0xc00061c420) (0xc0007fa6e0) Create stream
I0428 13:01:37.800159 127 log.go:172] (0xc00061c420) (0xc0007fa6e0) Stream added, broadcasting: 5
I0428 13:01:37.801423 127 log.go:172] (0xc00061c420) Reply frame received for 5
I0428 13:01:37.873265 127 log.go:172] (0xc00061c420) Data frame received for 5
I0428 13:01:37.873375 127 log.go:172] (0xc0007fa6e0) (5) Data frame handling
I0428 13:01:37.873393 127 log.go:172] (0xc0007fa6e0) (5) Data frame sent
I0428 13:01:37.873403 127 log.go:172] (0xc00061c420) Data frame received for 5
+ mv -v /tmp/index.html /usr/share/nginx/html/
I0428 13:01:37.873420 127 log.go:172] (0xc00061c420) Data frame received for 3
I0428 13:01:37.873438 127 log.go:172] (0xc0005e2280) (3) Data frame handling
I0428 13:01:37.873444 127 log.go:172] (0xc0005e2280) (3) Data frame sent
I0428 13:01:37.873450 127 log.go:172] (0xc00061c420) Data frame received for 3
I0428 13:01:37.873457 127 log.go:172] (0xc0005e2280) (3) Data frame handling
I0428 13:01:37.873485 127 log.go:172] (0xc0007fa6e0) (5) Data frame handling
I0428 13:01:37.875183 127 log.go:172] (0xc00061c420) Data frame received for 1
I0428 13:01:37.875194 127 log.go:172] (0xc0007fa640) (1) Data frame handling
I0428 13:01:37.875204 127 log.go:172] (0xc0007fa640) (1) Data frame sent
I0428 13:01:37.875304 127 log.go:172] (0xc00061c420) (0xc0007fa640) Stream removed, broadcasting: 1
I0428 13:01:37.875407 127 log.go:172] (0xc00061c420) Go away received
I0428 13:01:37.875772 127 log.go:172] (0xc00061c420) (0xc0007fa640) Stream removed, broadcasting: 1
I0428 13:01:37.875802 127 log.go:172] (0xc00061c420) (0xc0005e2280) Stream removed, broadcasting: 3
I0428 13:01:37.875820 127 log.go:172] (0xc00061c420) (0xc0007fa6e0) Stream removed, broadcasting: 5"
Apr 28 13:01:37.881: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 28 13:01:37.881: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 28 13:01:57.902: INFO: Deleting all statefulset in ns statefulset-2268
Apr 28 13:01:57.904: INFO: Scaling statefulset ss2 to 0
Apr 28 13:02:07.923: INFO: Waiting for statefulset status.replicas updated to 0
Apr 28 13:02:07.926: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:02:07.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2268" for this suite. 
Apr 28 13:02:13.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:02:14.038: INFO: namespace statefulset-2268 deletion completed in 6.087761721s • [SLOW TEST:127.561 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:02:14.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-31165e57-443b-4efd-b068-4c0c95f4eda0 STEP: Creating a pod to test consume secrets Apr 28 13:02:14.110: INFO: Waiting up to 5m0s for pod "pod-secrets-0bd12fd4-2149-4698-944b-d285d8378eeb" in namespace "secrets-1201" to be "success or failure" Apr 28 13:02:14.132: INFO: Pod "pod-secrets-0bd12fd4-2149-4698-944b-d285d8378eeb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.776566ms Apr 28 13:02:16.136: INFO: Pod "pod-secrets-0bd12fd4-2149-4698-944b-d285d8378eeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026715269s Apr 28 13:02:18.141: INFO: Pod "pod-secrets-0bd12fd4-2149-4698-944b-d285d8378eeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031462392s STEP: Saw pod success Apr 28 13:02:18.141: INFO: Pod "pod-secrets-0bd12fd4-2149-4698-944b-d285d8378eeb" satisfied condition "success or failure" Apr 28 13:02:18.144: INFO: Trying to get logs from node iruya-worker pod pod-secrets-0bd12fd4-2149-4698-944b-d285d8378eeb container secret-volume-test: STEP: delete the pod Apr 28 13:02:18.180: INFO: Waiting for pod pod-secrets-0bd12fd4-2149-4698-944b-d285d8378eeb to disappear Apr 28 13:02:18.194: INFO: Pod pod-secrets-0bd12fd4-2149-4698-944b-d285d8378eeb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:02:18.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1201" for this suite. 
Apr 28 13:02:24.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:02:24.296: INFO: namespace secrets-1201 deletion completed in 6.097892889s • [SLOW TEST:10.257 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:02:24.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 28 13:02:24.370: INFO: Waiting up to 5m0s for pod "pod-c74b303b-da1b-4976-a184-81da44401deb" in namespace "emptydir-8970" to be "success or failure" Apr 28 13:02:24.392: INFO: Pod "pod-c74b303b-da1b-4976-a184-81da44401deb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.073384ms Apr 28 13:02:26.396: INFO: Pod "pod-c74b303b-da1b-4976-a184-81da44401deb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026420872s Apr 28 13:02:28.401: INFO: Pod "pod-c74b303b-da1b-4976-a184-81da44401deb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030868931s STEP: Saw pod success Apr 28 13:02:28.401: INFO: Pod "pod-c74b303b-da1b-4976-a184-81da44401deb" satisfied condition "success or failure" Apr 28 13:02:28.404: INFO: Trying to get logs from node iruya-worker2 pod pod-c74b303b-da1b-4976-a184-81da44401deb container test-container: STEP: delete the pod Apr 28 13:02:28.447: INFO: Waiting for pod pod-c74b303b-da1b-4976-a184-81da44401deb to disappear Apr 28 13:02:28.458: INFO: Pod pod-c74b303b-da1b-4976-a184-81da44401deb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:02:28.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8970" for this suite. Apr 28 13:02:34.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:02:34.547: INFO: namespace emptydir-8970 deletion completed in 6.086323044s • [SLOW TEST:10.252 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:02:34.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into 
pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 28 13:02:39.181: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6589 pod-service-account-44ffa804-fe7d-4ab2-93ab-125fec12b3f4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 28 13:02:39.419: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6589 pod-service-account-44ffa804-fe7d-4ab2-93ab-125fec12b3f4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 28 13:02:39.619: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6589 pod-service-account-44ffa804-fe7d-4ab2-93ab-125fec12b3f4 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:02:39.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6589" for this suite. 
Apr 28 13:02:45.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:02:45.951: INFO: namespace svcaccounts-6589 deletion completed in 6.115461054s • [SLOW TEST:11.403 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:02:45.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-bn6k STEP: Creating a pod to test atomic-volume-subpath Apr 28 13:02:46.044: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bn6k" in namespace "subpath-4458" to be "success or failure" Apr 28 13:02:46.049: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364775ms Apr 28 13:02:48.052: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007965005s Apr 28 13:02:50.057: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 4.012345549s Apr 28 13:02:52.061: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 6.016696539s Apr 28 13:02:54.066: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 8.021119157s Apr 28 13:02:56.070: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 10.0254988s Apr 28 13:02:58.074: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 12.029452458s Apr 28 13:03:00.078: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 14.034020507s Apr 28 13:03:02.083: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 16.03831707s Apr 28 13:03:04.087: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 18.042655978s Apr 28 13:03:06.091: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 20.046117994s Apr 28 13:03:08.094: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Running", Reason="", readiness=true. Elapsed: 22.049980802s Apr 28 13:03:10.099: INFO: Pod "pod-subpath-test-configmap-bn6k": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054583722s STEP: Saw pod success Apr 28 13:03:10.099: INFO: Pod "pod-subpath-test-configmap-bn6k" satisfied condition "success or failure" Apr 28 13:03:10.102: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-bn6k container test-container-subpath-configmap-bn6k: STEP: delete the pod Apr 28 13:03:10.124: INFO: Waiting for pod pod-subpath-test-configmap-bn6k to disappear Apr 28 13:03:10.142: INFO: Pod pod-subpath-test-configmap-bn6k no longer exists STEP: Deleting pod pod-subpath-test-configmap-bn6k Apr 28 13:03:10.142: INFO: Deleting pod "pod-subpath-test-configmap-bn6k" in namespace "subpath-4458" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:03:10.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4458" for this suite. Apr 28 13:03:16.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:03:16.241: INFO: namespace subpath-4458 deletion completed in 6.094501237s • [SLOW TEST:30.289 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 28 13:03:16.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 28 13:03:20.902: INFO: Successfully updated pod "annotationupdate17de8219-4a7a-4386-8a36-cd21f3328b02" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:03:22.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6078" for this suite. Apr 28 13:03:44.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:03:45.061: INFO: namespace downward-api-6078 deletion completed in 22.11475028s • [SLOW TEST:28.820 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 28 13:03:45.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-7stf STEP: Creating a pod to test atomic-volume-subpath Apr 28 13:03:45.189: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7stf" in namespace "subpath-1060" to be "success or failure" Apr 28 13:03:45.194: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212906ms Apr 28 13:03:47.197: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007638991s Apr 28 13:03:49.201: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. Elapsed: 4.011771213s Apr 28 13:03:51.206: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. Elapsed: 6.016241366s Apr 28 13:03:53.210: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. Elapsed: 8.02050735s Apr 28 13:03:55.215: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. Elapsed: 10.025102913s Apr 28 13:03:57.219: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. Elapsed: 12.029509375s Apr 28 13:03:59.223: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. Elapsed: 14.03329974s Apr 28 13:04:01.227: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.037542315s Apr 28 13:04:03.231: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. Elapsed: 18.041305451s Apr 28 13:04:05.234: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. Elapsed: 20.044689273s Apr 28 13:04:07.238: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Running", Reason="", readiness=true. Elapsed: 22.04881809s Apr 28 13:04:09.243: INFO: Pod "pod-subpath-test-configmap-7stf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053262174s STEP: Saw pod success Apr 28 13:04:09.243: INFO: Pod "pod-subpath-test-configmap-7stf" satisfied condition "success or failure" Apr 28 13:04:09.246: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-7stf container test-container-subpath-configmap-7stf: STEP: delete the pod Apr 28 13:04:09.269: INFO: Waiting for pod pod-subpath-test-configmap-7stf to disappear Apr 28 13:04:09.290: INFO: Pod pod-subpath-test-configmap-7stf no longer exists STEP: Deleting pod pod-subpath-test-configmap-7stf Apr 28 13:04:09.290: INFO: Deleting pod "pod-subpath-test-configmap-7stf" in namespace "subpath-1060" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:04:09.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1060" for this suite. 
Apr 28 13:04:15.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:04:15.394: INFO: namespace subpath-1060 deletion completed in 6.097524247s • [SLOW TEST:30.333 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:04:15.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 13:04:15.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 
--namespace=kubectl-5734' Apr 28 13:04:15.528: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 28 13:04:15.528: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Apr 28 13:04:17.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5734' Apr 28 13:04:17.734: INFO: stderr: "" Apr 28 13:04:17.734: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:04:17.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5734" for this suite. 
Apr 28 13:06:13.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:06:13.900: INFO: namespace kubectl-5734 deletion completed in 1m56.162923961s • [SLOW TEST:118.505 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:06:13.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 13:06:13.963: INFO: Creating ReplicaSet my-hostname-basic-099f8a81-e69e-4c8c-a519-b4a81fdb1f02 Apr 28 13:06:14.002: INFO: Pod name my-hostname-basic-099f8a81-e69e-4c8c-a519-b4a81fdb1f02: Found 0 pods out of 1 Apr 28 13:06:19.007: INFO: Pod name my-hostname-basic-099f8a81-e69e-4c8c-a519-b4a81fdb1f02: Found 1 pods out of 1 Apr 28 13:06:19.007: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-099f8a81-e69e-4c8c-a519-b4a81fdb1f02" is running Apr 28 13:06:19.010: INFO: Pod 
"my-hostname-basic-099f8a81-e69e-4c8c-a519-b4a81fdb1f02-thpf2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 13:06:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 13:06:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 13:06:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 13:06:14 +0000 UTC Reason: Message:}]) Apr 28 13:06:19.010: INFO: Trying to dial the pod Apr 28 13:06:24.032: INFO: Controller my-hostname-basic-099f8a81-e69e-4c8c-a519-b4a81fdb1f02: Got expected result from replica 1 [my-hostname-basic-099f8a81-e69e-4c8c-a519-b4a81fdb1f02-thpf2]: "my-hostname-basic-099f8a81-e69e-4c8c-a519-b4a81fdb1f02-thpf2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:06:24.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5749" for this suite. 
Apr 28 13:06:30.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:06:30.132: INFO: namespace replicaset-5749 deletion completed in 6.096244446s • [SLOW TEST:16.231 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:06:30.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-3bcec8ed-ab23-42d5-ba98-4966b7a7c1d8 STEP: Creating a pod to test consume configMaps Apr 28 13:06:30.242: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-462fb9cd-c46f-4af7-ba12-995c2d18dc6d" in namespace "projected-6372" to be "success or failure" Apr 28 13:06:30.246: INFO: Pod "pod-projected-configmaps-462fb9cd-c46f-4af7-ba12-995c2d18dc6d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.80617ms Apr 28 13:06:32.250: INFO: Pod "pod-projected-configmaps-462fb9cd-c46f-4af7-ba12-995c2d18dc6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008300706s Apr 28 13:06:34.255: INFO: Pod "pod-projected-configmaps-462fb9cd-c46f-4af7-ba12-995c2d18dc6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012990962s STEP: Saw pod success Apr 28 13:06:34.255: INFO: Pod "pod-projected-configmaps-462fb9cd-c46f-4af7-ba12-995c2d18dc6d" satisfied condition "success or failure" Apr 28 13:06:34.259: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-462fb9cd-c46f-4af7-ba12-995c2d18dc6d container projected-configmap-volume-test: STEP: delete the pod Apr 28 13:06:34.290: INFO: Waiting for pod pod-projected-configmaps-462fb9cd-c46f-4af7-ba12-995c2d18dc6d to disappear Apr 28 13:06:34.314: INFO: Pod pod-projected-configmaps-462fb9cd-c46f-4af7-ba12-995c2d18dc6d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:06:34.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6372" for this suite. 
Apr 28 13:06:40.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:06:40.420: INFO: namespace projected-6372 deletion completed in 6.102189355s • [SLOW TEST:10.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:06:40.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 13:06:40.505: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5316b1ca-4c22-4216-a757-dd550e86caf1" in namespace "downward-api-1298" to be "success or failure" Apr 28 13:06:40.508: INFO: Pod "downwardapi-volume-5316b1ca-4c22-4216-a757-dd550e86caf1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.962602ms Apr 28 13:06:42.513: INFO: Pod "downwardapi-volume-5316b1ca-4c22-4216-a757-dd550e86caf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007232165s Apr 28 13:06:44.517: INFO: Pod "downwardapi-volume-5316b1ca-4c22-4216-a757-dd550e86caf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011563286s STEP: Saw pod success Apr 28 13:06:44.517: INFO: Pod "downwardapi-volume-5316b1ca-4c22-4216-a757-dd550e86caf1" satisfied condition "success or failure" Apr 28 13:06:44.520: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5316b1ca-4c22-4216-a757-dd550e86caf1 container client-container: STEP: delete the pod Apr 28 13:06:44.554: INFO: Waiting for pod downwardapi-volume-5316b1ca-4c22-4216-a757-dd550e86caf1 to disappear Apr 28 13:06:44.568: INFO: Pod downwardapi-volume-5316b1ca-4c22-4216-a757-dd550e86caf1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:06:44.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1298" for this suite. 
Apr 28 13:06:50.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:06:50.716: INFO: namespace downward-api-1298 deletion completed in 6.144136082s
• [SLOW TEST:10.296 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:06:50.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Apr 28 13:06:50.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Apr 28 13:06:50.852: INFO: stderr: ""
Apr 28 13:06:50.852: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:06:50.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7804" for this suite.
Apr 28 13:06:56.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:06:56.950: INFO: namespace kubectl-7804 deletion completed in 6.094077789s
• [SLOW TEST:6.234 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:06:56.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:07:27.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5092" for this suite.
Apr 28 13:07:33.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:07:33.608: INFO: namespace container-runtime-5092 deletion completed in 6.085797509s
• [SLOW TEST:36.658 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:07:33.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 28 13:07:33.733: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 28 13:07:38.738: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:07:39.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4933" for this suite.
Apr 28 13:07:45.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:07:45.874: INFO: namespace replication-controller-4933 deletion completed in 6.109304565s
• [SLOW TEST:12.266 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:07:45.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 28 13:07:45.903: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 28 13:07:45.909: INFO: Waiting for terminating namespaces to be deleted...
Apr 28 13:07:45.912: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 28 13:07:45.916: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 28 13:07:45.916: INFO: Container kube-proxy ready: true, restart count 0
Apr 28 13:07:45.916: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 28 13:07:45.916: INFO: Container kindnet-cni ready: true, restart count 0
Apr 28 13:07:45.916: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 28 13:07:45.922: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 28 13:07:45.922: INFO: Container coredns ready: true, restart count 0
Apr 28 13:07:45.922: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 28 13:07:45.922: INFO: Container coredns ready: true, restart count 0
Apr 28 13:07:45.922: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 28 13:07:45.922: INFO: Container kube-proxy ready: true, restart count 0
Apr 28 13:07:45.922: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 28 13:07:45.922: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 28 13:07:46.005: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Apr 28 13:07:46.005: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Apr 28 13:07:46.005: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
Apr 28 13:07:46.005: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
Apr 28 13:07:46.005: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
Apr 28 13:07:46.005: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-1b16f10c-4534-4c61-95ac-2308535d8499.1609fdadae56df0f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9001/filler-pod-1b16f10c-4534-4c61-95ac-2308535d8499 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1b16f10c-4534-4c61-95ac-2308535d8499.1609fdae06df77d0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1b16f10c-4534-4c61-95ac-2308535d8499.1609fdae487d7a8d], Reason = [Created], Message = [Created container filler-pod-1b16f10c-4534-4c61-95ac-2308535d8499]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1b16f10c-4534-4c61-95ac-2308535d8499.1609fdae5cea12f6], Reason = [Started], Message = [Started container filler-pod-1b16f10c-4534-4c61-95ac-2308535d8499]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3465553b-444f-4bc0-900e-e0260cf80cea.1609fdadafe17fbd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9001/filler-pod-3465553b-444f-4bc0-900e-e0260cf80cea to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3465553b-444f-4bc0-900e-e0260cf80cea.1609fdae3720fa7e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3465553b-444f-4bc0-900e-e0260cf80cea.1609fdae6b90ef54], Reason = [Created], Message = [Created container filler-pod-3465553b-444f-4bc0-900e-e0260cf80cea]
STEP: Considering event: Type = [Normal], Name = [filler-pod-3465553b-444f-4bc0-900e-e0260cf80cea.1609fdae7d1b22b5], Reason = [Started], Message = [Started container filler-pod-3465553b-444f-4bc0-900e-e0260cf80cea]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1609fdae9f561229], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:07:51.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9001" for this suite.
Apr 28 13:07:57.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:07:57.212: INFO: namespace sched-pred-9001 deletion completed in 6.086588769s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:11.338 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:07:57.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9337/configmap-test-77c04fed-53bb-4f61-8abe-4d01a233ce0f
STEP: Creating a pod to test consume configMaps
Apr 28 13:07:57.303: INFO: Waiting up to 5m0s for pod "pod-configmaps-ad7727e3-ed97-429b-960f-cc6e79c4b3b6" in namespace "configmap-9337" to be "success or failure"
Apr 28 13:07:57.307: INFO: Pod "pod-configmaps-ad7727e3-ed97-429b-960f-cc6e79c4b3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.731494ms
Apr 28 13:07:59.311: INFO: Pod "pod-configmaps-ad7727e3-ed97-429b-960f-cc6e79c4b3b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007617966s
Apr 28 13:08:01.351: INFO: Pod "pod-configmaps-ad7727e3-ed97-429b-960f-cc6e79c4b3b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04816121s
STEP: Saw pod success
Apr 28 13:08:01.351: INFO: Pod "pod-configmaps-ad7727e3-ed97-429b-960f-cc6e79c4b3b6" satisfied condition "success or failure"
Apr 28 13:08:01.354: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ad7727e3-ed97-429b-960f-cc6e79c4b3b6 container env-test:
STEP: delete the pod
Apr 28 13:08:01.461: INFO: Waiting for pod pod-configmaps-ad7727e3-ed97-429b-960f-cc6e79c4b3b6 to disappear
Apr 28 13:08:01.487: INFO: Pod pod-configmaps-ad7727e3-ed97-429b-960f-cc6e79c4b3b6 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:08:01.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9337" for this suite.
Apr 28 13:08:07.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:08:07.573: INFO: namespace configmap-9337 deletion completed in 6.082706168s
• [SLOW TEST:10.359 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:08:07.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Apr 28 13:08:07.652: INFO: Waiting up to 5m0s for pod "client-containers-9ea98f63-e533-48d5-9040-c2a90ff6e485" in namespace "containers-3527" to be "success or failure"
Apr 28 13:08:07.655: INFO: Pod "client-containers-9ea98f63-e533-48d5-9040-c2a90ff6e485": Phase="Pending", Reason="", readiness=false. Elapsed: 3.577275ms
Apr 28 13:08:09.659: INFO: Pod "client-containers-9ea98f63-e533-48d5-9040-c2a90ff6e485": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007610486s
Apr 28 13:08:11.664: INFO: Pod "client-containers-9ea98f63-e533-48d5-9040-c2a90ff6e485": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012014325s
STEP: Saw pod success
Apr 28 13:08:11.664: INFO: Pod "client-containers-9ea98f63-e533-48d5-9040-c2a90ff6e485" satisfied condition "success or failure"
Apr 28 13:08:11.667: INFO: Trying to get logs from node iruya-worker2 pod client-containers-9ea98f63-e533-48d5-9040-c2a90ff6e485 container test-container:
STEP: delete the pod
Apr 28 13:08:11.718: INFO: Waiting for pod client-containers-9ea98f63-e533-48d5-9040-c2a90ff6e485 to disappear
Apr 28 13:08:11.727: INFO: Pod client-containers-9ea98f63-e533-48d5-9040-c2a90ff6e485 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:08:11.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3527" for this suite.
Apr 28 13:08:17.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:08:17.820: INFO: namespace containers-3527 deletion completed in 6.088673126s
• [SLOW TEST:10.246 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:08:17.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 28 13:08:17.929: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1531,SelfLink:/api/v1/namespaces/watch-1531/configmaps/e2e-watch-test-label-changed,UID:89e9d121-b8b1-4162-8b0a-db4c1ea816e3,ResourceVersion:7894367,Generation:0,CreationTimestamp:2020-04-28 13:08:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 28 13:08:17.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1531,SelfLink:/api/v1/namespaces/watch-1531/configmaps/e2e-watch-test-label-changed,UID:89e9d121-b8b1-4162-8b0a-db4c1ea816e3,ResourceVersion:7894368,Generation:0,CreationTimestamp:2020-04-28 13:08:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 28 13:08:17.930: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1531,SelfLink:/api/v1/namespaces/watch-1531/configmaps/e2e-watch-test-label-changed,UID:89e9d121-b8b1-4162-8b0a-db4c1ea816e3,ResourceVersion:7894369,Generation:0,CreationTimestamp:2020-04-28 13:08:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 28 13:08:27.975: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1531,SelfLink:/api/v1/namespaces/watch-1531/configmaps/e2e-watch-test-label-changed,UID:89e9d121-b8b1-4162-8b0a-db4c1ea816e3,ResourceVersion:7894391,Generation:0,CreationTimestamp:2020-04-28 13:08:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 28 13:08:27.975: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1531,SelfLink:/api/v1/namespaces/watch-1531/configmaps/e2e-watch-test-label-changed,UID:89e9d121-b8b1-4162-8b0a-db4c1ea816e3,ResourceVersion:7894392,Generation:0,CreationTimestamp:2020-04-28 13:08:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Apr 28 13:08:27.975: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1531,SelfLink:/api/v1/namespaces/watch-1531/configmaps/e2e-watch-test-label-changed,UID:89e9d121-b8b1-4162-8b0a-db4c1ea816e3,ResourceVersion:7894393,Generation:0,CreationTimestamp:2020-04-28 13:08:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:08:27.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1531" for this suite.
Apr 28 13:08:34.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:08:34.115: INFO: namespace watch-1531 deletion completed in 6.098492799s
• [SLOW TEST:16.294 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:08:34.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:08:39.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8362" for this suite.
Apr 28 13:09:01.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:09:01.428: INFO: namespace replication-controller-8362 deletion completed in 22.091707753s
• [SLOW TEST:27.312 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:09:01.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 28 13:09:01.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1738'
Apr 28 13:09:04.199: INFO: stderr: ""
Apr 28 13:09:04.199: INFO: stdout: "replicationcontroller/redis-master created\n"
Apr 28 13:09:04.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1738'
Apr 28 13:09:04.510: INFO: stderr: ""
Apr 28 13:09:04.510: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 28 13:09:05.514: INFO: Selector matched 1 pods for map[app:redis]
Apr 28 13:09:05.514: INFO: Found 0 / 1
Apr 28 13:09:06.515: INFO: Selector matched 1 pods for map[app:redis]
Apr 28 13:09:06.515: INFO: Found 0 / 1
Apr 28 13:09:07.514: INFO: Selector matched 1 pods for map[app:redis]
Apr 28 13:09:07.514: INFO: Found 1 / 1
Apr 28 13:09:07.514: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Apr 28 13:09:07.517: INFO: Selector matched 1 pods for map[app:redis]
Apr 28 13:09:07.517: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Apr 28 13:09:07.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-xbx87 --namespace=kubectl-1738'
Apr 28 13:09:07.627: INFO: stderr: ""
Apr 28 13:09:07.627: INFO: stdout: "Name: redis-master-xbx87\nNamespace: kubectl-1738\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Tue, 28 Apr 2020 13:09:04 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.203\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://e0e62abed551cddbd4aa87738d0fc55ebe1148ec63aacd8a5d43e3602314c04a\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 28 Apr 2020 13:09:06 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-74ttz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-74ttz:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-74ttz\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-1738/redis-master-xbx87 to iruya-worker2\n Normal Pulled 2s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n"
Apr 28 13:09:07.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-1738'
Apr 28 13:09:07.736: INFO: stderr: ""
Apr 28 13:09:07.736: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1738\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-xbx87\n"
Apr 28 13:09:07.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1738'
Apr 28 13:09:07.834: INFO: stderr: ""
Apr 28 13:09:07.834: INFO: stdout: "Name: redis-master\nNamespace: kubectl-1738\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.230.170\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.203:6379\nSession Affinity: None\nEvents: \n"
Apr 28 13:09:07.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Apr 28 13:09:07.969: INFO: stderr: ""
Apr 28 13:09:07.969: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 28 Apr 2020 13:08:42 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 28 Apr 2020 13:08:42 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 28 Apr 2020 13:08:42 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 28 Apr 2020 13:08:42 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n
Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 43d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 43d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 28 13:09:07.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1738' Apr 28 13:09:08.070: INFO: stderr: "" Apr 28 13:09:08.070: INFO: stdout: "Name: kubectl-1738\nLabels: e2e-framework=kubectl\n e2e-run=a859a4f3-c485-4786-8b9e-28db8dedbdc9\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:09:08.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1738" for this suite. 
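The `kubectl describe` output above fixes the shape of the objects this test created from stdin with `kubectl create -f -`. A reconstruction of those manifests as Python dicts (inferred from the describe output in the log, not copied from the actual e2e fixture files), together with the label-selector rule that links the Service's endpoint `10.244.1.203:6379` back to the pod:

```python
# Reconstructed from the describe output above; field values are inferred
# from the log, not taken from the e2e test source.
rc = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {"name": "redis-master",
                 "labels": {"app": "redis", "role": "master"}},
    "spec": {
        "replicas": 1,
        "selector": {"app": "redis", "role": "master"},
        "template": {
            "metadata": {"labels": {"app": "redis", "role": "master"}},
            "spec": {"containers": [{
                "name": "redis-master",
                "image": "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                "ports": [{"name": "redis-server", "containerPort": 6379}],
            }]},
        },
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "redis-master"},
    "spec": {
        "selector": {"app": "redis", "role": "master"},
        "ports": [{"port": 6379, "targetPort": "redis-server"}],
    },
}

def service_selects(svc, pod_labels):
    # A Service selects a pod when every selector key/value pair is
    # present in the pod's labels (extra pod labels are allowed).
    return all(pod_labels.get(k) == v for k, v in svc["spec"]["selector"].items())

pod_labels = rc["spec"]["template"]["metadata"]["labels"]
print(service_selects(service, pod_labels))  # True -> Endpoints get populated
```

This is why the describe output shows `Endpoints: 10.244.1.203:6379`: the pod created by the RC carries both `app=redis` and `role=master`, so the Service's selector matches it.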
Apr 28 13:09:30.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:09:30.182: INFO: namespace kubectl-1738 deletion completed in 22.108140189s • [SLOW TEST:28.754 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:09:30.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 13:09:30.300: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
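The repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines that follow come from taint/toleration matching: the control-plane node carries the `node-role.kubernetes.io/master:NoSchedule` taint, while the test DaemonSet's pods only carry the default NoExecute tolerations. A simplified sketch of the matching rule (illustrative, not the scheduler's actual code; the real check also handles empty keys with `Exists` and `tolerationSeconds`):

```python
def tolerates(taint, tolerations):
    # A pod tolerates a taint if some toleration matches its key,
    # effect, and (for operator Equal) value. Simplified model.
    for t in tolerations:
        key_ok = t.get("key") in (None, taint["key"])
        effect_ok = t.get("effect") in (None, "", taint["effect"])
        value_ok = (t.get("operator", "Equal") == "Exists"
                    or t.get("value", "") == taint.get("value", ""))
        if key_ok and effect_ok and value_ok:
            return True
    return False

master_taint = {"key": "node-role.kubernetes.io/master",
                "value": "", "effect": "NoSchedule"}

# The default tolerations visible in the pod describe output earlier:
default_tolerations = [
    {"key": "node.kubernetes.io/not-ready", "operator": "Exists",
     "effect": "NoExecute"},
    {"key": "node.kubernetes.io/unreachable", "operator": "Exists",
     "effect": "NoExecute"},
]

print(tolerates(master_taint, default_tolerations))  # False -> node skipped
```

Because the result is `False`, the framework skips the control-plane node and only expects daemon pods on the two workers, which is why the test converges at "Number of running nodes: 2".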
Apr 28 13:09:30.307: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:30.358: INFO: Number of nodes with available pods: 0 Apr 28 13:09:30.358: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:09:31.361: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:31.364: INFO: Number of nodes with available pods: 0 Apr 28 13:09:31.364: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:09:32.363: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:32.366: INFO: Number of nodes with available pods: 0 Apr 28 13:09:32.366: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:09:33.363: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:33.366: INFO: Number of nodes with available pods: 0 Apr 28 13:09:33.366: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:09:34.364: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:34.368: INFO: Number of nodes with available pods: 2 Apr 28 13:09:34.368: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 28 13:09:34.437: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 28 13:09:34.437: INFO: Wrong image for pod: daemon-set-gkx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:34.491: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:35.495: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:35.495: INFO: Wrong image for pod: daemon-set-gkx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:35.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:36.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:36.496: INFO: Wrong image for pod: daemon-set-gkx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:36.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:37.495: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:37.495: INFO: Wrong image for pod: daemon-set-gkx5k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 28 13:09:37.495: INFO: Pod daemon-set-gkx5k is not available Apr 28 13:09:37.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:38.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:38.496: INFO: Pod daemon-set-vzntc is not available Apr 28 13:09:38.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:39.538: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:39.538: INFO: Pod daemon-set-vzntc is not available Apr 28 13:09:39.542: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:40.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:40.496: INFO: Pod daemon-set-vzntc is not available Apr 28 13:09:40.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:41.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:41.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:42.550: INFO: Wrong image for pod: daemon-set-gfznh. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:42.554: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:43.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:43.496: INFO: Pod daemon-set-gfznh is not available Apr 28 13:09:43.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:44.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:44.496: INFO: Pod daemon-set-gfznh is not available Apr 28 13:09:44.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:45.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:45.496: INFO: Pod daemon-set-gfznh is not available Apr 28 13:09:45.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:46.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 28 13:09:46.496: INFO: Pod daemon-set-gfznh is not available Apr 28 13:09:46.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:47.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:47.496: INFO: Pod daemon-set-gfznh is not available Apr 28 13:09:47.499: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:48.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:48.496: INFO: Pod daemon-set-gfznh is not available Apr 28 13:09:48.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:49.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:49.496: INFO: Pod daemon-set-gfznh is not available Apr 28 13:09:49.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:50.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 28 13:09:50.496: INFO: Pod daemon-set-gfznh is not available Apr 28 13:09:50.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:51.496: INFO: Wrong image for pod: daemon-set-gfznh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 28 13:09:51.496: INFO: Pod daemon-set-gfznh is not available Apr 28 13:09:51.500: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:52.496: INFO: Pod daemon-set-8mwc2 is not available Apr 28 13:09:52.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
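The churn above, where one pod at a time is reported "not available" until its replacement comes up, is the RollingUpdate strategy with the default `maxUnavailable` of 1. A toy simulation of that loop (replacement pod names here are hypothetical, not the controller's actual naming or algorithm):

```python
def rolling_update(pod_images, new_image, max_unavailable=1):
    # Replace stale pods at most `max_unavailable` at a time, recording
    # each (deleted, created) step. A toy model of the observed behavior.
    pods = dict(pod_images)
    steps = []
    counter = 0
    while any(img != new_image for img in pods.values()):
        stale = [n for n, img in pods.items()
                 if img != new_image][:max_unavailable]
        for name in stale:
            del pods[name]                      # controller deletes stale pod
            counter += 1
            replacement = f"daemon-set-new{counter}"  # hypothetical name
            pods[replacement] = new_image       # replacement becomes available
            steps.append((name, replacement))
    return pods, steps

pods, steps = rolling_update(
    {"daemon-set-gkx5k": "docker.io/library/nginx:1.14-alpine",
     "daemon-set-gfznh": "docker.io/library/nginx:1.14-alpine"},
    "gcr.io/kubernetes-e2e-test-images/redis:1.0")
print(len(steps))  # 2: one replacement per node, mirroring the log
```

With `max_unavailable=1` the two nodes are updated sequentially, which matches the log: `daemon-set-gkx5k` goes unavailable and is replaced first, and only then is `daemon-set-gfznh` deleted and replaced.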
Apr 28 13:09:52.504: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:52.507: INFO: Number of nodes with available pods: 1 Apr 28 13:09:52.507: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:09:53.515: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:53.518: INFO: Number of nodes with available pods: 1 Apr 28 13:09:53.518: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:09:54.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:54.515: INFO: Number of nodes with available pods: 1 Apr 28 13:09:54.515: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:09:55.512: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:09:55.515: INFO: Number of nodes with available pods: 2 Apr 28 13:09:55.515: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8108, will wait for the garbage collector to delete the pods Apr 28 13:09:55.588: INFO: Deleting DaemonSet.extensions daemon-set took: 6.148006ms Apr 28 13:09:55.888: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.235211ms Apr 28 13:10:02.197: INFO: Number of nodes with available pods: 0 Apr 28 13:10:02.197: INFO: Number of running nodes: 0, number of available pods: 
0 Apr 28 13:10:02.200: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8108/daemonsets","resourceVersion":"7894728"},"items":null} Apr 28 13:10:02.203: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8108/pods","resourceVersion":"7894728"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:10:02.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8108" for this suite. Apr 28 13:10:08.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:10:08.327: INFO: namespace daemonsets-8108 deletion completed in 6.111482549s • [SLOW TEST:38.144 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:10:08.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 28 13:10:08.414: INFO: Waiting up to 5m0s for pod "pod-993f2086-f657-4fb8-a60d-99391d085db5" in namespace "emptydir-4541" to be "success or failure" Apr 28 13:10:08.443: INFO: Pod "pod-993f2086-f657-4fb8-a60d-99391d085db5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.604837ms Apr 28 13:10:10.447: INFO: Pod "pod-993f2086-f657-4fb8-a60d-99391d085db5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033007953s Apr 28 13:10:12.452: INFO: Pod "pod-993f2086-f657-4fb8-a60d-99391d085db5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037431273s STEP: Saw pod success Apr 28 13:10:12.452: INFO: Pod "pod-993f2086-f657-4fb8-a60d-99391d085db5" satisfied condition "success or failure" Apr 28 13:10:12.455: INFO: Trying to get logs from node iruya-worker pod pod-993f2086-f657-4fb8-a60d-99391d085db5 container test-container: STEP: delete the pod Apr 28 13:10:12.522: INFO: Waiting for pod pod-993f2086-f657-4fb8-a60d-99391d085db5 to disappear Apr 28 13:10:12.538: INFO: Pod pod-993f2086-f657-4fb8-a60d-99391d085db5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:10:12.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4541" for this suite. 
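The pod in this test writes a file into a Memory-medium emptyDir with mode 0666 and succeeds only if the mount is tmpfs and the permissions read back as `rw-rw-rw-`. The permission half of that check, in miniature (run against the local filesystem rather than an actual tmpfs mount):

```python
import os
import stat
import tempfile

# Create a scratch file, set mode 0666 as the volume's defaultMode would,
# and read the permission bits back -- the test's "success" condition.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o666
os.unlink(path)
```

Unlike file creation, `chmod` is not filtered through the process umask, so the bits read back exactly as set, which is what the test container verifies before exiting with success.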
Apr 28 13:10:18.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:10:18.650: INFO: namespace emptydir-4541 deletion completed in 6.108662844s • [SLOW TEST:10.323 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:10:18.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Apr 28 13:10:18.738: INFO: Waiting up to 5m0s for pod "client-containers-af987a81-35bf-443e-b0ad-b07747425a80" in namespace "containers-9371" to be "success or failure" Apr 28 13:10:18.754: INFO: Pod "client-containers-af987a81-35bf-443e-b0ad-b07747425a80": Phase="Pending", Reason="", readiness=false. Elapsed: 15.881523ms Apr 28 13:10:20.758: INFO: Pod "client-containers-af987a81-35bf-443e-b0ad-b07747425a80": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019565948s Apr 28 13:10:22.762: INFO: Pod "client-containers-af987a81-35bf-443e-b0ad-b07747425a80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023434764s STEP: Saw pod success Apr 28 13:10:22.762: INFO: Pod "client-containers-af987a81-35bf-443e-b0ad-b07747425a80" satisfied condition "success or failure" Apr 28 13:10:22.784: INFO: Trying to get logs from node iruya-worker2 pod client-containers-af987a81-35bf-443e-b0ad-b07747425a80 container test-container: STEP: delete the pod Apr 28 13:10:22.809: INFO: Waiting for pod client-containers-af987a81-35bf-443e-b0ad-b07747425a80 to disappear Apr 28 13:10:22.814: INFO: Pod client-containers-af987a81-35bf-443e-b0ad-b07747425a80 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:10:22.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9371" for this suite. Apr 28 13:10:28.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:10:28.910: INFO: namespace containers-9371 deletion completed in 6.09312377s • [SLOW TEST:10.260 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:10:28.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-7fc41c9d-6d98-4e8c-a6ed-4f44b4edbf83 STEP: Creating a pod to test consume secrets Apr 28 13:10:28.997: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1fecdcdf-dbf2-4574-aac8-5cacca1f940d" in namespace "projected-5187" to be "success or failure" Apr 28 13:10:29.014: INFO: Pod "pod-projected-secrets-1fecdcdf-dbf2-4574-aac8-5cacca1f940d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.725928ms Apr 28 13:10:31.083: INFO: Pod "pod-projected-secrets-1fecdcdf-dbf2-4574-aac8-5cacca1f940d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086694187s Apr 28 13:10:33.087: INFO: Pod "pod-projected-secrets-1fecdcdf-dbf2-4574-aac8-5cacca1f940d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.090848248s STEP: Saw pod success Apr 28 13:10:33.088: INFO: Pod "pod-projected-secrets-1fecdcdf-dbf2-4574-aac8-5cacca1f940d" satisfied condition "success or failure" Apr 28 13:10:33.091: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-1fecdcdf-dbf2-4574-aac8-5cacca1f940d container projected-secret-volume-test: STEP: delete the pod Apr 28 13:10:33.134: INFO: Waiting for pod pod-projected-secrets-1fecdcdf-dbf2-4574-aac8-5cacca1f940d to disappear Apr 28 13:10:33.137: INFO: Pod pod-projected-secrets-1fecdcdf-dbf2-4574-aac8-5cacca1f940d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:10:33.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5187" for this suite. Apr 28 13:10:39.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:10:39.236: INFO: namespace projected-5187 deletion completed in 6.096261324s • [SLOW TEST:10.326 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:10:39.236: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 28 13:10:46.767: INFO: 0 pods remaining Apr 28 13:10:46.767: INFO: 0 pods has nil DeletionTimestamp Apr 28 13:10:46.767: INFO: Apr 28 13:10:47.001: INFO: 0 pods remaining Apr 28 13:10:47.001: INFO: 0 pods has nil DeletionTimestamp Apr 28 13:10:47.001: INFO: STEP: Gathering metrics W0428 13:10:48.224331 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 28 13:10:48.224: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:10:48.224: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3627" for this suite. Apr 28 13:10:56.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:10:56.400: INFO: namespace gc-3627 deletion completed in 8.165031079s • [SLOW TEST:17.164 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:10:56.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Apr 28 13:10:56.577: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix610430278/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:10:56.642: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "kubectl-5138" for this suite. Apr 28 13:11:02.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:11:02.766: INFO: namespace kubectl-5138 deletion completed in 6.119738789s • [SLOW TEST:6.365 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:11:02.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 13:11:02.829: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 28 13:11:04.867: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set 
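The ReplicationController test above creates a quota that admits only two pods, creates an rc that asks for more, checks that the rc surfaces a failure condition, and then scales the rc down until the condition clears. A minimal sketch of equivalent manifests (the names reuse the test's "condition-test" label; the replica count and pod image are illustrative assumptions, not read from the test's actual specs):

```yaml
# Quota capping the namespace at two pods, mirroring the
# "condition-test" quota the e2e test creates.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
# An rc requesting more replicas than the quota allows; pod creation
# beyond the quota is rejected, so the controller sets a
# ReplicaFailure condition in the rc's status.
# (replica count and image are illustrative)
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: nginx
```

As the log shows, scaling the rc back down so its pods fit within the quota is enough for the failure condition to be removed from the rc's status.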
[AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:11:05.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7571" for this suite. Apr 28 13:11:12.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:11:12.138: INFO: namespace replication-controller-7571 deletion completed in 6.094486565s • [SLOW TEST:9.372 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:11:12.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 13:11:12.421: INFO: Creating deployment "nginx-deployment" Apr 28 13:11:12.427: INFO: Waiting for observed generation 1 Apr 28 13:11:14.497: INFO: Waiting for all required pods to come up Apr 28 
13:11:14.502: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 28 13:11:22.513: INFO: Waiting for deployment "nginx-deployment" to complete Apr 28 13:11:22.520: INFO: Updating deployment "nginx-deployment" with a non-existent image Apr 28 13:11:22.529: INFO: Updating deployment nginx-deployment Apr 28 13:11:22.530: INFO: Waiting for observed generation 2 Apr 28 13:11:24.546: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 28 13:11:24.549: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 28 13:11:24.552: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 28 13:11:24.561: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 28 13:11:24.561: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 28 13:11:24.564: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 28 13:11:24.569: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Apr 28 13:11:24.569: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Apr 28 13:11:24.576: INFO: Updating deployment nginx-deployment Apr 28 13:11:24.576: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Apr 28 13:11:24.630: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 28 13:11:24.719: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 28 13:11:24.927: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6422,SelfLink:/apis/apps/v1/namespaces/deployment-6422/deployments/nginx-deployment,UID:1fc0d8b2-ab47-4052-91b5-ff5a2ccda6e8,ResourceVersion:7895407,Generation:3,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-04-28 13:11:22 +0000 UTC 2020-04-28 13:11:12 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-28 13:11:24 +0000 UTC 2020-04-28 13:11:24 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 28 13:11:24.997: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6422,SelfLink:/apis/apps/v1/namespaces/deployment-6422/replicasets/nginx-deployment-55fb7cb77f,UID:f9355031-b34e-4032-b3f8-5cacd9048825,ResourceVersion:7895444,Generation:3,CreationTimestamp:2020-04-28 13:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1fc0d8b2-ab47-4052-91b5-ff5a2ccda6e8 0xc0031ffd77 0xc0031ffd78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 13:11:24.997: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 28 13:11:24.997: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6422,SelfLink:/apis/apps/v1/namespaces/deployment-6422/replicasets/nginx-deployment-7b8c6f4498,UID:084530ad-dba6-4d5a-8652-50d621ec1adf,ResourceVersion:7895427,Generation:3,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1fc0d8b2-ab47-4052-91b5-ff5a2ccda6e8 0xc0031ffe47 0xc0031ffe48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 28 13:11:25.130: INFO: Pod "nginx-deployment-55fb7cb77f-24lp7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-24lp7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-24lp7,UID:1bad9fa5-c9a7-44a0-b9bc-80c02e2c2332,ResourceVersion:7895378,Generation:0,CreationTimestamp:2020-04-28 13:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc0031027c7 0xc0031027c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003102840} {node.kubernetes.io/unreachable Exists NoExecute 0xc003102860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-28 13:11:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.131: INFO: Pod "nginx-deployment-55fb7cb77f-2nktf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2nktf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-2nktf,UID:e04daaeb-45e0-4a7c-9e2e-1e2f523a6839,ResourceVersion:7895437,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003102930 0xc003102931}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031029b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031029d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.131: INFO: Pod "nginx-deployment-55fb7cb77f-2p27h" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2p27h,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-2p27h,UID:e1c50615-6d04-49f1-b84f-1d11d132cd9d,ResourceVersion:7895357,Generation:0,CreationTimestamp:2020-04-28 13:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003102a57 0xc003102a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc003102ad0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003102af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-28 13:11:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.131: INFO: Pod "nginx-deployment-55fb7cb77f-746vk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-746vk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-746vk,UID:33cbc192-d28e-47f1-a094-194c7a460e09,ResourceVersion:7895375,Generation:0,CreationTimestamp:2020-04-28 13:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003102bc0 0xc003102bc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003102c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc003102c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-28 13:11:22 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.131: INFO: Pod "nginx-deployment-55fb7cb77f-9qm5s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9qm5s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-9qm5s,UID:e01c41b5-70ef-42fb-9b2a-51dce9ee7764,ResourceVersion:7895434,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003102d30 0xc003102d31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003102db0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003102dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.131: INFO: Pod "nginx-deployment-55fb7cb77f-f7t4t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f7t4t,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-f7t4t,UID:acd2961d-a3df-4c68-91c8-77fc224b77f6,ResourceVersion:7895432,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003102e57 0xc003102e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003102ed0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003102ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.131: INFO: Pod "nginx-deployment-55fb7cb77f-hc7kz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hc7kz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-hc7kz,UID:12fa3514-5d71-4489-a98c-0260811439f5,ResourceVersion:7895439,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003102f77 0xc003102f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003102ff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.132: INFO: Pod "nginx-deployment-55fb7cb77f-lxxpc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lxxpc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-lxxpc,UID:e38b7f09-0bec-4599-82ab-ae93371db176,ResourceVersion:7895380,Generation:0,CreationTimestamp:2020-04-28 13:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003103097 0xc003103098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003103110} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-28 13:11:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.132: INFO: Pod "nginx-deployment-55fb7cb77f-qhzcm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qhzcm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-qhzcm,UID:d2d19986-8ea6-437b-8d12-2cf2b3a26806,ResourceVersion:7895355,Generation:0,CreationTimestamp:2020-04-28 13:11:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003103200 0xc003103201}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003103280} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031032a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-28 13:11:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.132: INFO: Pod "nginx-deployment-55fb7cb77f-rd42x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rd42x,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-rd42x,UID:d36d686d-4fff-4b69-b093-fa6f2f08ed09,ResourceVersion:7895429,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003103370 0xc003103371}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031033f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.132: INFO: Pod "nginx-deployment-55fb7cb77f-t7nd6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t7nd6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-t7nd6,UID:7a25a28f-bf2f-44ec-bec1-5550b588abad,ResourceVersion:7895419,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc003103497 0xc003103498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003103510} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.132: INFO: Pod "nginx-deployment-55fb7cb77f-w7cf4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w7cf4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-w7cf4,UID:b9980169-c814-4856-8edb-0263301c6161,ResourceVersion:7895421,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc0031035b7 0xc0031035b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003103630} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.132: INFO: Pod "nginx-deployment-55fb7cb77f-xm2kf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xm2kf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-55fb7cb77f-xm2kf,UID:86e5f0a4-f602-4964-a8fa-0f0c850c24c4,ResourceVersion:7895403,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f9355031-b34e-4032-b3f8-5cacd9048825 0xc0031036d7 0xc0031036d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003103750} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.133: INFO: Pod "nginx-deployment-7b8c6f4498-246x7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-246x7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-246x7,UID:31fc6e28-a9b5-4817-b077-b06b105df41c,ResourceVersion:7895445,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc0031037f7 0xc0031037f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003103870} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-28 13:11:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.133: INFO: Pod "nginx-deployment-7b8c6f4498-26mj6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-26mj6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-26mj6,UID:9d0d21c3-6ab9-4415-909a-66351cb9fc09,ResourceVersion:7895291,Generation:0,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003103957 0xc003103958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031039d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031039f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.215,StartTime:2020-04-28 13:11:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 13:11:18 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://34e5bd0ec2c27da717bf18ffe2457a1e1a62374d68fc5c4d692196ac9fc41dfc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.133: INFO: Pod "nginx-deployment-7b8c6f4498-42jcl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-42jcl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-42jcl,UID:966a1e2c-a9ee-49a0-aac6-96261cc4355c,ResourceVersion:7895302,Generation:0,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003103ac7 0xc003103ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003103b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.216,StartTime:2020-04-28 13:11:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 13:11:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://494bb576688bf5ba06d05d159b8b785289353f826d85692134a224f3b0c06654}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.133: INFO: Pod "nginx-deployment-7b8c6f4498-4hdls" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4hdls,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-4hdls,UID:2527eec6-f5af-4514-bec5-1e6ba0e0de18,ResourceVersion:7895438,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003103c37 0xc003103c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003103cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-28 13:11:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.133: INFO: Pod "nginx-deployment-7b8c6f4498-5qjvf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5qjvf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-5qjvf,UID:dcac98b8-e381-4405-8f83-817d87196821,ResourceVersion:7895420,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003103d97 0xc003103d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003103e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.134: INFO: Pod "nginx-deployment-7b8c6f4498-7qtw2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7qtw2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-7qtw2,UID:5ee22355-585e-49c1-80cc-6b7142d432ca,ResourceVersion:7895436,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003103eb7 0xc003103eb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003103f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc003103f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.134: INFO: Pod "nginx-deployment-7b8c6f4498-7qzzk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7qzzk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-7qzzk,UID:68768c4e-5b6b-44a7-b1bd-6bc20e4e86d2,ResourceVersion:7895451,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003103fd7 0xc003103fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110050} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-28 13:11:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.134: INFO: Pod "nginx-deployment-7b8c6f4498-8tlwx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8tlwx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-8tlwx,UID:b999fba4-32d2-46ae-98b0-51f564318917,ResourceVersion:7895423,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003110137 0xc003110138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031101b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031101d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.135: INFO: Pod "nginx-deployment-7b8c6f4498-dr5qm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dr5qm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-dr5qm,UID:c3d87da1-cc41-4e4b-8845-9cc780e78a91,ResourceVersion:7895319,Generation:0,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003110257 0xc003110258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031102d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031102f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.217,StartTime:2020-04-28 13:11:12 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-28 13:11:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0cc29ed2e1dcf73c27ef28e1e41f8aa10832946b14163453ded87cbbd46d2f0d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.135: INFO: Pod "nginx-deployment-7b8c6f4498-fzdwg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fzdwg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-fzdwg,UID:b8645217-a7b3-4163-8959-ab7cbead92b5,ResourceVersion:7895430,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc0031103c7 0xc0031103c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110440} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.135: INFO: Pod "nginx-deployment-7b8c6f4498-htj95" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-htj95,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-htj95,UID:efc47d79-7082-408c-ab58-69e1369774de,ResourceVersion:7895274,Generation:0,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc0031104e7 0xc0031104e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110560} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.213,StartTime:2020-04-28 13:11:12 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-28 13:11:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d7d1f4d6fdd19b21fa7f73b2f72af3a7815a86f498b334b16ffc2c19f0956be0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.135: INFO: Pod "nginx-deployment-7b8c6f4498-jvfnp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jvfnp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-jvfnp,UID:ccb0b3d4-d1a3-4068-8240-652b6dfb2e3f,ResourceVersion:7895312,Generation:0,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003110657 0xc003110658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0031106d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031106f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.10,StartTime:2020-04-28 13:11:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 13:11:21 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://82cba51d75bb58ca8bfdb4c16614eba5cdd160f87a39cde9eac66eb1fc2678c8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.135: INFO: Pod "nginx-deployment-7b8c6f4498-l9256" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l9256,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-l9256,UID:cb720890-1c0f-4fd6-84b0-e8f92d710cb4,ResourceVersion:7895433,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc0031107c7 0xc0031107c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110840} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.135: INFO: Pod "nginx-deployment-7b8c6f4498-q8g9d" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q8g9d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-q8g9d,UID:4f2e5151-1335-4672-9a0d-b754296397f1,ResourceVersion:7895288,Generation:0,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc0031108e7 0xc0031108e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110960} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.6,StartTime:2020-04-28 13:11:12 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-28 13:11:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://58abd45412b4cb4d5ab253aaf6abbb9115ce312347f9565a65dbdf20b949851f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.136: INFO: Pod "nginx-deployment-7b8c6f4498-scgkm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-scgkm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-scgkm,UID:beaf5bba-e383-4539-bd6a-2949baef893e,ResourceVersion:7895318,Generation:0,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003110a57 0xc003110a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110ad0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.7,StartTime:2020-04-28 13:11:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-28 13:11:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://30b193abf3b4757e60b6d6889bc6b12f25215896bcc572fd42dcfd6509e1ff3b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.136: INFO: Pod "nginx-deployment-7b8c6f4498-vn8sm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vn8sm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-vn8sm,UID:d8a3cefa-797a-4ed3-ac11-338ea0bfb4c8,ResourceVersion:7895422,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003110bc7 0xc003110bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110c40} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.136: INFO: Pod "nginx-deployment-7b8c6f4498-vvphw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vvphw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-vvphw,UID:5601460b-88db-41ca-b91f-017923063826,ResourceVersion:7895435,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003110ce7 0xc003110ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110d60} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.137: INFO: Pod "nginx-deployment-7b8c6f4498-wjxkv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wjxkv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-wjxkv,UID:50df2801-4e4f-4cb6-897d-d4b1a08d25fd,ResourceVersion:7895431,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003110e07 0xc003110e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110e80} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.137: INFO: Pod "nginx-deployment-7b8c6f4498-z5nxm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z5nxm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-z5nxm,UID:d08beca6-6b60-4803-b00e-97ad9e2066f2,ResourceVersion:7895287,Generation:0,CreationTimestamp:2020-04-28 13:11:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003110f27 0xc003110f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003110fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003110fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.214,StartTime:2020-04-28 13:11:12 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-28 13:11:18 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c865f92603648931506ba9669455d375652812a49926972f91148ee2bc5de0a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 28 13:11:25.137: INFO: Pod "nginx-deployment-7b8c6f4498-zc888" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zc888,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6422,SelfLink:/api/v1/namespaces/deployment-6422/pods/nginx-deployment-7b8c6f4498-zc888,UID:77b55758-dee3-4d6f-ba6c-8431c478a5a3,ResourceVersion:7895425,Generation:0,CreationTimestamp:2020-04-28 13:11:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 084530ad-dba6-4d5a-8652-50d621ec1adf 0xc003111097 0xc003111098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qqphs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qqphs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qqphs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003111110} {node.kubernetes.io/unreachable Exists NoExecute 0xc003111130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:11:24 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:11:25.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6422" for this suite. 
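The deployment test above (namespace `deployment-6422`) exercises proportional scaling: when a Deployment with `maxSurge` is scaled mid-rollout, added or removed replicas are distributed across its ReplicaSets in proportion to their current sizes. A simplified sketch of that allocation idea follows; this is an illustration under stated assumptions, not the actual deployment controller algorithm, and `proportional_scale` is a hypothetical helper name.

```python
def proportional_scale(replica_counts, new_total):
    """Illustrative proportional allocation across ReplicaSets.

    Each ReplicaSet keeps roughly its share of the new total; the
    integer remainder goes to the sets with the largest fractional
    share. NOT kubernetes' exact controller logic -- a sketch only.
    """
    old_total = sum(replica_counts)
    # floor of each set's proportional share
    shares = [c * new_total // old_total for c in replica_counts]
    leftover = new_total - sum(shares)
    # hand leftover replicas to the largest fractional remainders
    order = sorted(range(len(replica_counts)),
                   key=lambda i: (replica_counts[i] * new_total) % old_total,
                   reverse=True)
    for i in order[:leftover]:
        shares[i] += 1
    return shares
```

For example, scaling two equal ReplicaSets of 5 each to a total of 30 yields 15 and 15, and the shares always sum to the requested total.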
Apr 28 13:11:43.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:11:43.621: INFO: namespace deployment-6422 deletion completed in 18.333313667s • [SLOW TEST:31.483 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:11:43.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-4a3ba404-c995-4556-93ba-552656fb9f1d STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:11:52.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7654" for this suite. 
Apr 28 13:12:16.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:12:16.599: INFO: namespace configmap-7654 deletion completed in 24.132297387s • [SLOW TEST:32.978 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:12:16.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-bb47b685-1ab7-438c-91d3-60991622dc23 STEP: Creating a pod to test consume configMaps Apr 28 13:12:16.689: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff529361-161d-482f-be04-4757b8d5c493" in namespace "projected-777" to be "success or failure" Apr 28 13:12:16.698: INFO: Pod "pod-projected-configmaps-ff529361-161d-482f-be04-4757b8d5c493": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.852736ms Apr 28 13:12:18.703: INFO: Pod "pod-projected-configmaps-ff529361-161d-482f-be04-4757b8d5c493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013144179s Apr 28 13:12:20.707: INFO: Pod "pod-projected-configmaps-ff529361-161d-482f-be04-4757b8d5c493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017497555s STEP: Saw pod success Apr 28 13:12:20.707: INFO: Pod "pod-projected-configmaps-ff529361-161d-482f-be04-4757b8d5c493" satisfied condition "success or failure" Apr 28 13:12:20.710: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-ff529361-161d-482f-be04-4757b8d5c493 container projected-configmap-volume-test: STEP: delete the pod Apr 28 13:12:20.729: INFO: Waiting for pod pod-projected-configmaps-ff529361-161d-482f-be04-4757b8d5c493 to disappear Apr 28 13:12:20.734: INFO: Pod pod-projected-configmaps-ff529361-161d-482f-be04-4757b8d5c493 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:12:20.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-777" for this suite. 
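The records above repeatedly show the framework's wait pattern: "Waiting up to 5m0s for pod ... to be 'success or failure'", with each poll logging the pod phase and the elapsed time until the phase reaches Succeeded. A minimal sketch of that kind of phase-polling loop is below; the helper name `get_phase` is hypothetical, and this is not the e2e framework's actual implementation.

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal pod phase or timeout.

    Mirrors the log pattern above: each poll reports phase and
    elapsed time; the wait ends at Succeeded or Failed.
    get_phase is a stand-in for an API-server lookup.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        time.sleep(interval)
```

With a pod that reports Pending twice and then Succeeded, the loop returns "Succeeded" after three polls, matching the three Elapsed entries seen per test above.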
Apr 28 13:12:26.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:12:26.836: INFO: namespace projected-777 deletion completed in 6.098566625s • [SLOW TEST:10.236 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:12:26.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 28 13:12:26.910: INFO: Waiting up to 5m0s for pod "pod-e013b13f-f82f-4c79-b16d-f8b49efcf5d6" in namespace "emptydir-45" to be "success or failure" Apr 28 13:12:26.927: INFO: Pod "pod-e013b13f-f82f-4c79-b16d-f8b49efcf5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.465258ms Apr 28 13:12:28.931: INFO: Pod "pod-e013b13f-f82f-4c79-b16d-f8b49efcf5d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02021784s Apr 28 13:12:30.934: INFO: Pod "pod-e013b13f-f82f-4c79-b16d-f8b49efcf5d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024055007s STEP: Saw pod success Apr 28 13:12:30.935: INFO: Pod "pod-e013b13f-f82f-4c79-b16d-f8b49efcf5d6" satisfied condition "success or failure" Apr 28 13:12:30.937: INFO: Trying to get logs from node iruya-worker pod pod-e013b13f-f82f-4c79-b16d-f8b49efcf5d6 container test-container: STEP: delete the pod Apr 28 13:12:30.978: INFO: Waiting for pod pod-e013b13f-f82f-4c79-b16d-f8b49efcf5d6 to disappear Apr 28 13:12:31.004: INFO: Pod pod-e013b13f-f82f-4c79-b16d-f8b49efcf5d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:12:31.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-45" for this suite. Apr 28 13:12:37.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:12:37.138: INFO: namespace emptydir-45 deletion completed in 6.129249911s • [SLOW TEST:10.302 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:12:37.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 13:12:37.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4dcf7398-6b51-4515-b068-8eab8ba1caf9" in namespace "projected-245" to be "success or failure" Apr 28 13:12:37.280: INFO: Pod "downwardapi-volume-4dcf7398-6b51-4515-b068-8eab8ba1caf9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.798483ms Apr 28 13:12:39.284: INFO: Pod "downwardapi-volume-4dcf7398-6b51-4515-b068-8eab8ba1caf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016796149s Apr 28 13:12:41.288: INFO: Pod "downwardapi-volume-4dcf7398-6b51-4515-b068-8eab8ba1caf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020747689s STEP: Saw pod success Apr 28 13:12:41.288: INFO: Pod "downwardapi-volume-4dcf7398-6b51-4515-b068-8eab8ba1caf9" satisfied condition "success or failure" Apr 28 13:12:41.290: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4dcf7398-6b51-4515-b068-8eab8ba1caf9 container client-container: STEP: delete the pod Apr 28 13:12:41.368: INFO: Waiting for pod downwardapi-volume-4dcf7398-6b51-4515-b068-8eab8ba1caf9 to disappear Apr 28 13:12:41.382: INFO: Pod downwardapi-volume-4dcf7398-6b51-4515-b068-8eab8ba1caf9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:12:41.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-245" for this suite. 
Apr 28 13:12:47.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:12:47.466: INFO: namespace projected-245 deletion completed in 6.080244095s • [SLOW TEST:10.327 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:12:47.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-36d31edf-c2a8-463a-b8ca-d2089afbfa0e STEP: Creating a pod to test consume configMaps Apr 28 13:12:47.558: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2c9738b6-50f3-4ece-b407-9d01b9def503" in namespace "projected-2488" to be "success or failure" Apr 28 13:12:47.562: INFO: Pod "pod-projected-configmaps-2c9738b6-50f3-4ece-b407-9d01b9def503": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.64996ms Apr 28 13:12:49.566: INFO: Pod "pod-projected-configmaps-2c9738b6-50f3-4ece-b407-9d01b9def503": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007808578s Apr 28 13:12:51.570: INFO: Pod "pod-projected-configmaps-2c9738b6-50f3-4ece-b407-9d01b9def503": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012166947s STEP: Saw pod success Apr 28 13:12:51.570: INFO: Pod "pod-projected-configmaps-2c9738b6-50f3-4ece-b407-9d01b9def503" satisfied condition "success or failure" Apr 28 13:12:51.573: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-2c9738b6-50f3-4ece-b407-9d01b9def503 container projected-configmap-volume-test: STEP: delete the pod Apr 28 13:12:51.593: INFO: Waiting for pod pod-projected-configmaps-2c9738b6-50f3-4ece-b407-9d01b9def503 to disappear Apr 28 13:12:51.624: INFO: Pod pod-projected-configmaps-2c9738b6-50f3-4ece-b407-9d01b9def503 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:12:51.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2488" for this suite. 
Apr 28 13:12:57.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:12:57.738: INFO: namespace projected-2488 deletion completed in 6.110251315s • [SLOW TEST:10.272 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:12:57.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 13:12:57.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-5753' Apr 28 13:12:57.871: INFO: stderr: "" Apr 28 13:12:57.871: INFO: 
stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 28 13:13:02.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-5753 -o json' Apr 28 13:13:03.020: INFO: stderr: "" Apr 28 13:13:03.020: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-28T13:12:57Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-5753\",\n \"resourceVersion\": \"7896019\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5753/pods/e2e-test-nginx-pod\",\n \"uid\": \"d6445a7c-a12a-4add-aca3-d1c45efb1ebd\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-tpppn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-tpppn\",\n \"secret\": {\n \"defaultMode\": 420,\n 
\"secretName\": \"default-token-tpppn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T13:12:57Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T13:13:01Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T13:13:01Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-28T13:12:57Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://97b2c09c886b46b5edc5917af7b915d225db78cc244a89c4566478a155f97cd8\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-28T13:13:00Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.232\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-28T13:12:57Z\"\n }\n}\n" STEP: replace the image in the pod Apr 28 13:13:03.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5753' Apr 28 13:13:03.290: INFO: stderr: "" Apr 28 13:13:03.290: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 28 13:13:03.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5753' Apr 28 13:13:06.314: INFO: 
stderr: "" Apr 28 13:13:06.314: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:13:06.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5753" for this suite. Apr 28 13:13:12.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:13:12.406: INFO: namespace kubectl-5753 deletion completed in 6.088336952s • [SLOW TEST:14.668 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:13:12.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 13:13:30.486: INFO: Container started at 2020-04-28 13:13:14 +0000 UTC, pod became ready at 2020-04-28 13:13:29 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:13:30.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1225" for this suite. Apr 28 13:13:52.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:13:52.607: INFO: namespace container-probe-1225 deletion completed in 22.117191401s • [SLOW TEST:40.201 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:13:52.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 13:13:52.671: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0868f1ff-0661-4152-92ce-dba7d24b1d77" in namespace "downward-api-6099" to be "success or failure" Apr 28 13:13:52.713: INFO: Pod "downwardapi-volume-0868f1ff-0661-4152-92ce-dba7d24b1d77": Phase="Pending", Reason="", readiness=false. Elapsed: 41.545839ms Apr 28 13:13:54.717: INFO: Pod "downwardapi-volume-0868f1ff-0661-4152-92ce-dba7d24b1d77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045846109s Apr 28 13:13:56.722: INFO: Pod "downwardapi-volume-0868f1ff-0661-4152-92ce-dba7d24b1d77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050455778s STEP: Saw pod success Apr 28 13:13:56.722: INFO: Pod "downwardapi-volume-0868f1ff-0661-4152-92ce-dba7d24b1d77" satisfied condition "success or failure" Apr 28 13:13:56.725: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0868f1ff-0661-4152-92ce-dba7d24b1d77 container client-container: STEP: delete the pod Apr 28 13:13:56.751: INFO: Waiting for pod downwardapi-volume-0868f1ff-0661-4152-92ce-dba7d24b1d77 to disappear Apr 28 13:13:56.754: INFO: Pod downwardapi-volume-0868f1ff-0661-4152-92ce-dba7d24b1d77 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:13:56.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6099" for this suite. 
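The downward API volume test above creates a pod that projects its own memory limit into a file. A minimal sketch of such a manifest follows; the pod name, image, and mount path are illustrative, not taken from the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"                # the value projected below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The e2e framework then waits for the pod to reach `Succeeded` and checks the container log for the expected limit, which is why the entries above track the `Pending` → `Succeeded` phase transitions.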
Apr 28 13:14:02.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:14:02.868: INFO: namespace downward-api-6099 deletion completed in 6.11058477s • [SLOW TEST:10.260 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:14:02.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 13:14:02.954: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 6.925658ms) Apr 28 13:14:02.958: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.413747ms) Apr 28 13:14:02.962: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.493524ms) Apr 28 13:14:02.965: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.091027ms) Apr 28 13:14:02.968: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.828099ms) Apr 28 13:14:02.970: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.590592ms) Apr 28 13:14:02.973: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.278095ms) Apr 28 13:14:02.975: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.30855ms) Apr 28 13:14:02.978: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.765668ms) Apr 28 13:14:02.981: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.669128ms) Apr 28 13:14:02.983: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.839077ms) Apr 28 13:14:02.986: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.672345ms) Apr 28 13:14:02.989: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.934494ms) Apr 28 13:14:02.992: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.770445ms) Apr 28 13:14:02.995: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.745729ms) Apr 28 13:14:02.997: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.837697ms) Apr 28 13:14:03.001: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.223433ms) Apr 28 13:14:03.004: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.878934ms) Apr 28 13:14:03.006: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.77645ms) Apr 28 13:14:03.009: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.793767ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:14:03.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1704" for this suite. Apr 28 13:14:09.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:14:09.148: INFO: namespace proxy-1704 deletion completed in 6.135039871s • [SLOW TEST:6.280 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:14:09.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-20c63127-5cac-489d-ba4f-ea4b016c30e6 STEP: Creating a pod to test consume configMaps Apr 28 13:14:09.277: INFO: Waiting up to 
5m0s for pod "pod-configmaps-9caac5d6-aada-4b86-997d-01c912217569" in namespace "configmap-5745" to be "success or failure" Apr 28 13:14:09.306: INFO: Pod "pod-configmaps-9caac5d6-aada-4b86-997d-01c912217569": Phase="Pending", Reason="", readiness=false. Elapsed: 28.744528ms Apr 28 13:14:11.309: INFO: Pod "pod-configmaps-9caac5d6-aada-4b86-997d-01c912217569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032365998s Apr 28 13:14:13.314: INFO: Pod "pod-configmaps-9caac5d6-aada-4b86-997d-01c912217569": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036942017s STEP: Saw pod success Apr 28 13:14:13.314: INFO: Pod "pod-configmaps-9caac5d6-aada-4b86-997d-01c912217569" satisfied condition "success or failure" Apr 28 13:14:13.318: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-9caac5d6-aada-4b86-997d-01c912217569 container configmap-volume-test: STEP: delete the pod Apr 28 13:14:13.354: INFO: Waiting for pod pod-configmaps-9caac5d6-aada-4b86-997d-01c912217569 to disappear Apr 28 13:14:13.360: INFO: Pod pod-configmaps-9caac5d6-aada-4b86-997d-01c912217569 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:14:13.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5745" for this suite. 
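The ConfigMap volume test above mounts a ConfigMap with a key-to-path mapping and an explicit file mode on the item. A sketch of the shape it exercises, with hypothetical names and key (the test's actual data keys are not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                    # illustrative image
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map # must match the created ConfigMap
      items:
      - key: data-2                   # hypothetical key
        path: path/to/data-2          # mapped path, distinct from the key
        mode: 0400                    # per-item mode, the [LinuxOnly] part of the test
```

The per-item `mode` (here octal `0400`) is what distinguishes this variant from the plain mapped-volume test.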
Apr 28 13:14:19.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:14:19.468: INFO: namespace configmap-5745 deletion completed in 6.105086192s • [SLOW TEST:10.320 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:14:19.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 13:14:19.621: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 28 13:14:19.636: INFO: Number of nodes with available pods: 0 Apr 28 13:14:19.636: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 28 13:14:19.665: INFO: Number of nodes with available pods: 0 Apr 28 13:14:19.665: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:20.669: INFO: Number of nodes with available pods: 0 Apr 28 13:14:20.669: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:21.669: INFO: Number of nodes with available pods: 0 Apr 28 13:14:21.669: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:22.670: INFO: Number of nodes with available pods: 0 Apr 28 13:14:22.670: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:23.669: INFO: Number of nodes with available pods: 1 Apr 28 13:14:23.670: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 28 13:14:23.718: INFO: Number of nodes with available pods: 1 Apr 28 13:14:23.718: INFO: Number of running nodes: 0, number of available pods: 1 Apr 28 13:14:24.723: INFO: Number of nodes with available pods: 0 Apr 28 13:14:24.723: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 28 13:14:24.737: INFO: Number of nodes with available pods: 0 Apr 28 13:14:24.737: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:25.741: INFO: Number of nodes with available pods: 0 Apr 28 13:14:25.741: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:26.741: INFO: Number of nodes with available pods: 0 Apr 28 13:14:26.741: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:27.741: INFO: Number of nodes with available pods: 0 Apr 28 13:14:27.741: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:28.741: INFO: Number of nodes with available pods: 0 Apr 28 13:14:28.741: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:29.741: INFO: Number of nodes with available 
pods: 0 Apr 28 13:14:29.741: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:14:30.742: INFO: Number of nodes with available pods: 1 Apr 28 13:14:30.742: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-958, will wait for the garbage collector to delete the pods Apr 28 13:14:30.808: INFO: Deleting DaemonSet.extensions daemon-set took: 6.911216ms Apr 28 13:14:31.108: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.215448ms Apr 28 13:14:42.212: INFO: Number of nodes with available pods: 0 Apr 28 13:14:42.212: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 13:14:42.214: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-958/daemonsets","resourceVersion":"7896372"},"items":null} Apr 28 13:14:42.217: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-958/pods","resourceVersion":"7896372"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:14:42.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-958" for this suite. 
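The complex-daemon test above drives a DaemonSet through a node-selector change (blue → green) and an update-strategy change. A sketch of the final shape, with a hypothetical label key/value standing in for the test's generated labels:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate               # switched from OnDelete mid-test
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green                  # hypothetical label; the test relabels a node to match
      containers:
      - name: app
        image: nginx:1.14-alpine      # illustrative image
```

Because the pod template carries a `nodeSelector`, "Number of running nodes: 0" is the expected initial state until a node is labeled to match, which is exactly the progression the entries above record.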
Apr 28 13:14:48.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:14:48.360: INFO: namespace daemonsets-958 deletion completed in 6.098366955s • [SLOW TEST:28.892 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:14:48.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 28 13:14:56.530: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:14:56.590: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:14:58.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:14:58.595: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:00.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:00.595: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:02.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:02.594: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:04.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:04.594: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:06.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:06.595: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:08.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:08.595: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:10.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:10.594: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:12.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:12.595: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:14.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:14.595: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:16.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:16.594: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:18.591: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Apr 28 13:15:18.595: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:20.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:20.594: INFO: Pod pod-with-prestop-exec-hook still exists Apr 28 13:15:22.591: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 28 13:15:22.594: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:15:22.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8660" for this suite. Apr 28 13:15:44.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:15:44.700: INFO: namespace container-lifecycle-hook-8660 deletion completed in 22.094313751s • [SLOW TEST:56.339 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:15:44.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for 
a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 28 13:15:44.818: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:44.835: INFO: Number of nodes with available pods: 0 Apr 28 13:15:44.835: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:45.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:45.844: INFO: Number of nodes with available pods: 0 Apr 28 13:15:45.844: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:46.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:46.843: INFO: Number of nodes with available pods: 0 Apr 28 13:15:46.844: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:47.840: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:47.844: INFO: Number of nodes with available pods: 0 Apr 28 13:15:47.844: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:48.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node Apr 28 13:15:48.844: INFO: Number of nodes with available pods: 1 Apr 28 13:15:48.844: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:49.841: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:49.844: INFO: Number of nodes with available pods: 2 Apr 28 13:15:49.844: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 28 13:15:49.860: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:49.863: INFO: Number of nodes with available pods: 1 Apr 28 13:15:49.863: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:50.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:50.872: INFO: Number of nodes with available pods: 1 Apr 28 13:15:50.872: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:51.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:51.873: INFO: Number of nodes with available pods: 1 Apr 28 13:15:51.873: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:52.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:52.871: INFO: Number of nodes with available pods: 1 Apr 28 13:15:52.871: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:53.868: INFO: DaemonSet pods can't tolerate node 
iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:53.871: INFO: Number of nodes with available pods: 1 Apr 28 13:15:53.871: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:54.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:54.871: INFO: Number of nodes with available pods: 1 Apr 28 13:15:54.871: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:55.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:55.872: INFO: Number of nodes with available pods: 1 Apr 28 13:15:55.872: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:15:56.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:15:56.872: INFO: Number of nodes with available pods: 2 Apr 28 13:15:56.872: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6738, will wait for the garbage collector to delete the pods Apr 28 13:15:56.935: INFO: Deleting DaemonSet.extensions daemon-set took: 6.830584ms Apr 28 13:15:57.235: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.305808ms Apr 28 13:16:02.239: INFO: Number of nodes with available pods: 0 Apr 28 13:16:02.239: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 13:16:02.241: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6738/daemonsets","resourceVersion":"7896646"},"items":null} Apr 28 13:16:02.244: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6738/pods","resourceVersion":"7896646"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:16:02.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6738" for this suite. Apr 28 13:16:08.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:16:08.333: INFO: namespace daemonsets-6738 deletion completed in 6.078487402s • [SLOW TEST:23.633 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:16:08.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create 
the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 28 13:16:16.421: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 13:16:16.435: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 13:16:18.436: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 13:16:18.440: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 13:16:20.436: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 13:16:20.442: INFO: Pod pod-with-prestop-http-hook still exists Apr 28 13:16:22.436: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 28 13:16:22.440: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:16:22.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2295" for this suite. 
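The preStop HTTP hook test above deletes a pod and verifies the hook fired against a separate handler pod. A sketch of the hooked pod, with the handler path, port, and host IP all hypothetical (in the e2e test the host is the handler pod's cluster IP, discovered at runtime):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: nginx:1.14-alpine          # illustrative image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop     # hypothetical handler endpoint
          port: 8080
          host: 10.244.1.240          # hypothetical handler pod IP
```

The repeated "Waiting for pod … to disappear" entries reflect that deletion blocks on `terminationGracePeriodSeconds` while the hook runs, so the pod lingers for several poll cycles before it is gone.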
Apr 28 13:16:44.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:16:44.572: INFO: namespace container-lifecycle-hook-2295 deletion completed in 22.120402785s
• [SLOW TEST:36.239 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:16:44.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9447
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 28 13:16:44.673: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 28 13:17:12.780: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.236:8080/dial?request=hostName&protocol=udp&host=10.244.2.37&port=8081&tries=1'] Namespace:pod-network-test-9447 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 28 13:17:12.780: INFO: >>> kubeConfig: /root/.kube/config
I0428 13:17:12.820049 6 log.go:172] (0xc002196210) (0xc00236d680) Create stream
I0428 13:17:12.820086 6 log.go:172] (0xc002196210) (0xc00236d680) Stream added, broadcasting: 1
I0428 13:17:12.822460 6 log.go:172] (0xc002196210) Reply frame received for 1
I0428 13:17:12.822501 6 log.go:172] (0xc002196210) (0xc00236d720) Create stream
I0428 13:17:12.822517 6 log.go:172] (0xc002196210) (0xc00236d720) Stream added, broadcasting: 3
I0428 13:17:12.823263 6 log.go:172] (0xc002196210) Reply frame received for 3
I0428 13:17:12.823297 6 log.go:172] (0xc002196210) (0xc0029754a0) Create stream
I0428 13:17:12.823307 6 log.go:172] (0xc002196210) (0xc0029754a0) Stream added, broadcasting: 5
I0428 13:17:12.824037 6 log.go:172] (0xc002196210) Reply frame received for 5
I0428 13:17:12.901777 6 log.go:172] (0xc002196210) Data frame received for 3
I0428 13:17:12.901812 6 log.go:172] (0xc00236d720) (3) Data frame handling
I0428 13:17:12.901839 6 log.go:172] (0xc00236d720) (3) Data frame sent
I0428 13:17:12.902109 6 log.go:172] (0xc002196210) Data frame received for 5
I0428 13:17:12.902146 6 log.go:172] (0xc0029754a0) (5) Data frame handling
I0428 13:17:12.902170 6 log.go:172] (0xc002196210) Data frame received for 3
I0428 13:17:12.902180 6 log.go:172] (0xc00236d720) (3) Data frame handling
I0428 13:17:12.903981 6 log.go:172] (0xc002196210) Data frame received for 1
I0428 13:17:12.904074 6 log.go:172] (0xc00236d680) (1) Data frame handling
I0428 13:17:12.904149 6 log.go:172] (0xc00236d680) (1) Data frame sent
I0428 13:17:12.904175 6 log.go:172] (0xc002196210) (0xc00236d680) Stream removed, broadcasting: 1
I0428 13:17:12.904192 6 log.go:172] (0xc002196210) Go away received
I0428 13:17:12.904458 6 log.go:172] (0xc002196210) (0xc00236d680) Stream removed, broadcasting: 1
I0428 13:17:12.904494 6 log.go:172] (0xc002196210) (0xc00236d720) Stream removed, broadcasting: 3
I0428 13:17:12.904508 6 log.go:172] (0xc002196210) (0xc0029754a0) Stream removed, broadcasting: 5
Apr 28 13:17:12.904: INFO: Waiting for endpoints: map[]
Apr 28 13:17:12.908: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.236:8080/dial?request=hostName&protocol=udp&host=10.244.1.235&port=8081&tries=1'] Namespace:pod-network-test-9447 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 28 13:17:12.908: INFO: >>> kubeConfig: /root/.kube/config
I0428 13:17:12.938827 6 log.go:172] (0xc000a24fd0) (0xc00166c140) Create stream
I0428 13:17:12.938855 6 log.go:172] (0xc000a24fd0) (0xc00166c140) Stream added, broadcasting: 1
I0428 13:17:12.941852 6 log.go:172] (0xc000a24fd0) Reply frame received for 1
I0428 13:17:12.941888 6 log.go:172] (0xc000a24fd0) (0xc0019d1a40) Create stream
I0428 13:17:12.941899 6 log.go:172] (0xc000a24fd0) (0xc0019d1a40) Stream added, broadcasting: 3
I0428 13:17:12.942978 6 log.go:172] (0xc000a24fd0) Reply frame received for 3
I0428 13:17:12.943038 6 log.go:172] (0xc000a24fd0) (0xc0019d1ae0) Create stream
I0428 13:17:12.943061 6 log.go:172] (0xc000a24fd0) (0xc0019d1ae0) Stream added, broadcasting: 5
I0428 13:17:12.944167 6 log.go:172] (0xc000a24fd0) Reply frame received for 5
I0428 13:17:13.029297 6 log.go:172] (0xc000a24fd0) Data frame received for 3
I0428 13:17:13.029332 6 log.go:172] (0xc0019d1a40) (3) Data frame handling
I0428 13:17:13.029348 6 log.go:172] (0xc0019d1a40) (3) Data frame sent
I0428 13:17:13.029942 6 log.go:172] (0xc000a24fd0) Data frame received for 3
I0428 13:17:13.029968 6 log.go:172] (0xc0019d1a40) (3) Data frame handling
I0428 13:17:13.030049 6 log.go:172] (0xc000a24fd0) Data frame received for 5
I0428 13:17:13.030078 6 log.go:172] (0xc0019d1ae0) (5) Data frame handling
I0428 13:17:13.031398 6 log.go:172] (0xc000a24fd0) Data frame received for 1
I0428 13:17:13.031413 6 log.go:172] (0xc00166c140) (1) Data frame handling
I0428 13:17:13.031424 6 log.go:172] (0xc00166c140) (1) Data frame sent
I0428 13:17:13.031462 6 log.go:172] (0xc000a24fd0) (0xc00166c140) Stream removed, broadcasting: 1
I0428 13:17:13.031486 6 log.go:172] (0xc000a24fd0) Go away received
I0428 13:17:13.031578 6 log.go:172] (0xc000a24fd0) (0xc00166c140) Stream removed, broadcasting: 1
I0428 13:17:13.031595 6 log.go:172] (0xc000a24fd0) (0xc0019d1a40) Stream removed, broadcasting: 3
I0428 13:17:13.031602 6 log.go:172] (0xc000a24fd0) (0xc0019d1ae0) Stream removed, broadcasting: 5
Apr 28 13:17:13.031: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:17:13.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9447" for this suite.
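Editor's note: the ExecWithOptions entries above show the connectivity probe is just an HTTP GET against the netserver's /dial endpoint, whose query string names the target and protocol. A small Python sketch of how such a probe URL is assembled, with the host/port values copied from the first probe in this log:

```python
from urllib.parse import urlencode

# Parameters copied from the first /dial probe in the log above.
params = {
    "request": "hostName",   # ask the target to report its hostname
    "protocol": "udp",       # dial the target over UDP
    "host": "10.244.2.37",   # target pod IP
    "port": 8081,            # netserver UDP port
    "tries": 1,
}
# 10.244.1.236:8080 is the probing pod's HTTP endpoint.
url = "http://10.244.1.236:8080/dial?" + urlencode(params)
print(url)
```

The test passes when the dialed target answers with its hostname, which is why a successful probe ends with an empty "Waiting for endpoints: map[]".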
Apr 28 13:17:35.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:17:35.146: INFO: namespace pod-network-test-9447 deletion completed in 22.110821837s
• [SLOW TEST:50.573 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:17:35.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 28 13:17:35.233: INFO: Waiting up to 5m0s for pod "downward-api-0398e65b-0030-4e6a-98f0-884f0efd238b" in namespace "downward-api-6522" to be "success or failure"
Apr 28 13:17:35.261: INFO: Pod "downward-api-0398e65b-0030-4e6a-98f0-884f0efd238b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.197164ms
Apr 28 13:17:37.265: INFO: Pod "downward-api-0398e65b-0030-4e6a-98f0-884f0efd238b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032273843s
Apr 28 13:17:39.269: INFO: Pod "downward-api-0398e65b-0030-4e6a-98f0-884f0efd238b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036380961s
STEP: Saw pod success
Apr 28 13:17:39.270: INFO: Pod "downward-api-0398e65b-0030-4e6a-98f0-884f0efd238b" satisfied condition "success or failure"
Apr 28 13:17:39.272: INFO: Trying to get logs from node iruya-worker2 pod downward-api-0398e65b-0030-4e6a-98f0-884f0efd238b container dapi-container:
STEP: delete the pod
Apr 28 13:17:39.293: INFO: Waiting for pod downward-api-0398e65b-0030-4e6a-98f0-884f0efd238b to disappear
Apr 28 13:17:39.327: INFO: Pod downward-api-0398e65b-0030-4e6a-98f0-884f0efd238b no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:17:39.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6522" for this suite.
Apr 28 13:17:45.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:17:45.430: INFO: namespace downward-api-6522 deletion completed in 6.099673145s
• [SLOW TEST:10.284 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:17:45.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0428 13:17:46.632012 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 28 13:17:46.632: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:17:46.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8511" for this suite.
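Editor's note: "not orphaning" in this spec corresponds to deleting the Deployment with a cascading propagation policy in DeleteOptions, so the garbage collector also removes the dependent ReplicaSet and pods (hence the spec retries until "expected 0 rs" holds). A sketch of such a delete request body — the exact policy this spec uses is not shown in the log, so Background here is an assumption:

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Background"
}
```

With "Orphan" instead, dependents would survive the delete, which is the behaviour the earlier Garbage collector spec in this run verifies.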
Apr 28 13:17:52.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:17:52.830: INFO: namespace gc-8511 deletion completed in 6.193832268s
• [SLOW TEST:7.400 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:17:52.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-bc289d3d-6155-4c6f-9920-23d6de6279e2
STEP: Creating secret with name secret-projected-all-test-volume-5fe16862-718f-4a85-9eca-e01991d56672
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 28 13:17:52.933: INFO: Waiting up to 5m0s for pod "projected-volume-7191dd75-e2ce-44c9-96ef-f462cc898e04" in namespace "projected-2750" to be "success or failure"
Apr 28 13:17:52.957: INFO: Pod "projected-volume-7191dd75-e2ce-44c9-96ef-f462cc898e04": Phase="Pending", Reason="", readiness=false. Elapsed: 23.127139ms
Apr 28 13:17:54.980: INFO: Pod "projected-volume-7191dd75-e2ce-44c9-96ef-f462cc898e04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046779046s
Apr 28 13:17:56.985: INFO: Pod "projected-volume-7191dd75-e2ce-44c9-96ef-f462cc898e04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051318509s
STEP: Saw pod success
Apr 28 13:17:56.985: INFO: Pod "projected-volume-7191dd75-e2ce-44c9-96ef-f462cc898e04" satisfied condition "success or failure"
Apr 28 13:17:56.988: INFO: Trying to get logs from node iruya-worker pod projected-volume-7191dd75-e2ce-44c9-96ef-f462cc898e04 container projected-all-volume-test:
STEP: delete the pod
Apr 28 13:17:57.008: INFO: Waiting for pod projected-volume-7191dd75-e2ce-44c9-96ef-f462cc898e04 to disappear
Apr 28 13:17:57.013: INFO: Pod projected-volume-7191dd75-e2ce-44c9-96ef-f462cc898e04 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:17:57.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2750" for this suite.
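Editor's note: a projected volume combines several sources (configMap, secret, downwardAPI) under one mount point, which is what this spec exercises. A sketch of such a pod, reusing the configMap and secret names from the log above — the image, command, and mount paths are illustrative assumptions:

```yaml
# Hypothetical sketch of a pod like projected-volume-7191dd75-...
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-test          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                     # assumed image
    command: ["cat", "/all/podname"]   # assumed verification step
    volumeMounts:
    - name: podinfo
      mountPath: /all
  volumes:
  - name: podinfo
    projected:
      sources:
      - configMap:
          name: configmap-projected-all-test-volume-bc289d3d-6155-4c6f-9920-23d6de6279e2
      - secret:
          name: secret-projected-all-test-volume-5fe16862-718f-4a85-9eca-e01991d56672
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```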
Apr 28 13:18:03.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:18:03.102: INFO: namespace projected-2750 deletion completed in 6.085138179s
• [SLOW TEST:10.272 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:18:03.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Apr 28 13:18:03.187: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:18:03.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7598" for this suite.
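Editor's note: passing -p 0 (i.e. --port 0) asks kubectl proxy to bind an ephemeral local port and report it, which the spec then curls. A sketch of what the spec runs — not executable here without a cluster; the exact port in the output varies per run:

```
kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter
# prints e.g. "Starting to serve on 127.0.0.1:<ephemeral-port>"
# the spec then fetches http://127.0.0.1:<ephemeral-port>/api/
```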
Apr 28 13:18:09.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:18:09.394: INFO: namespace kubectl-7598 deletion completed in 6.115554774s
• [SLOW TEST:6.293 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:18:09.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 28 13:18:17.613: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:17.645: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:19.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:19.650: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:21.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:21.650: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:23.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:23.650: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:25.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:25.649: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:27.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:27.650: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:29.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:29.649: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:31.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:31.650: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:33.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:33.653: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:35.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:35.650: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:37.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:37.666: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:39.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:39.671: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:41.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:41.650: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 28 13:18:43.646: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 28 13:18:43.650: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:18:43.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1859" for this suite.
Apr 28 13:19:05.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:19:05.790: INFO: namespace container-lifecycle-hook-1859 deletion completed in 22.135463823s
• [SLOW TEST:56.396 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:19:05.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 28 13:19:05.867: INFO: Waiting up to 5m0s for pod "pod-1235df27-3869-4907-a3d1-f8ec8664fc5e" in namespace "emptydir-5156" to be "success or failure"
Apr 28 13:19:05.877: INFO: Pod "pod-1235df27-3869-4907-a3d1-f8ec8664fc5e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.96018ms
Apr 28 13:19:07.910: INFO: Pod "pod-1235df27-3869-4907-a3d1-f8ec8664fc5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042665421s
Apr 28 13:19:09.915: INFO: Pod "pod-1235df27-3869-4907-a3d1-f8ec8664fc5e": Phase="Running", Reason="", readiness=true. Elapsed: 4.047629239s
Apr 28 13:19:11.920: INFO: Pod "pod-1235df27-3869-4907-a3d1-f8ec8664fc5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052572873s
STEP: Saw pod success
Apr 28 13:19:11.920: INFO: Pod "pod-1235df27-3869-4907-a3d1-f8ec8664fc5e" satisfied condition "success or failure"
Apr 28 13:19:11.923: INFO: Trying to get logs from node iruya-worker2 pod pod-1235df27-3869-4907-a3d1-f8ec8664fc5e container test-container:
STEP: delete the pod
Apr 28 13:19:11.965: INFO: Waiting for pod pod-1235df27-3869-4907-a3d1-f8ec8664fc5e to disappear
Apr 28 13:19:12.145: INFO: Pod pod-1235df27-3869-4907-a3d1-f8ec8664fc5e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:19:12.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5156" for this suite.
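Editor's note: the emptydir spec above checks the default mount mode of an emptyDir on the node's default medium (disk). A pod of roughly that shape — the image and command are illustrative assumptions; the pod name is copied from the log:

```yaml
# Hypothetical sketch of a pod like pod-1235df27-3869-4907-a3d1-f8ec8664fc5e.
apiVersion: v1
kind: Pod
metadata:
  name: pod-1235df27-3869-4907-a3d1-f8ec8664fc5e
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                                # assumed image
    command: ["sh", "-c", "ls -ld /test-volume"]  # assumed mode check
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                                  # default medium: node disk
```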
Apr 28 13:19:18.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:19:18.274: INFO: namespace emptydir-5156 deletion completed in 6.12508445s
• [SLOW TEST:12.483 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:19:18.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 28 13:19:18.362: INFO: Waiting up to 5m0s for pod "pod-9e6ce930-adbc-4925-8d9e-9547cdb06895" in namespace "emptydir-8315" to be "success or failure"
Apr 28 13:19:18.365: INFO: Pod "pod-9e6ce930-adbc-4925-8d9e-9547cdb06895": Phase="Pending", Reason="", readiness=false. Elapsed: 3.198358ms
Apr 28 13:19:20.369: INFO: Pod "pod-9e6ce930-adbc-4925-8d9e-9547cdb06895": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007757489s
Apr 28 13:19:22.374: INFO: Pod "pod-9e6ce930-adbc-4925-8d9e-9547cdb06895": Phase="Running", Reason="", readiness=true. Elapsed: 4.012114026s
Apr 28 13:19:24.378: INFO: Pod "pod-9e6ce930-adbc-4925-8d9e-9547cdb06895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016443346s
STEP: Saw pod success
Apr 28 13:19:24.378: INFO: Pod "pod-9e6ce930-adbc-4925-8d9e-9547cdb06895" satisfied condition "success or failure"
Apr 28 13:19:24.381: INFO: Trying to get logs from node iruya-worker pod pod-9e6ce930-adbc-4925-8d9e-9547cdb06895 container test-container:
STEP: delete the pod
Apr 28 13:19:24.403: INFO: Waiting for pod pod-9e6ce930-adbc-4925-8d9e-9547cdb06895 to disappear
Apr 28 13:19:24.406: INFO: Pod pod-9e6ce930-adbc-4925-8d9e-9547cdb06895 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:19:24.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8315" for this suite.
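Editor's note: the "(non-root,0666,tmpfs)" variant differs from the default-medium spec in two ways: the container runs as a non-root user, and the emptyDir is memory-backed (tmpfs). A sketch under those assumptions — image, UID, and command are illustrative, not from the log:

```yaml
# Hypothetical sketch of a pod like pod-9e6ce930-adbc-4925-8d9e-9547cdb06895.
apiVersion: v1
kind: Pod
metadata:
  name: pod-9e6ce930-adbc-4925-8d9e-9547cdb06895
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # assumed non-root UID
  containers:
  - name: test-container
    image: busybox                    # assumed image
    # Assumed check: write a file with mode 0666 and verify its permissions.
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # tmpfs-backed emptyDir
```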
Apr 28 13:19:30.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:19:30.505: INFO: namespace emptydir-8315 deletion completed in 6.09542434s
• [SLOW TEST:12.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:19:30.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Apr 28 13:19:30.614: INFO: Waiting up to 5m0s for pod "var-expansion-b7c1b2cf-d4b4-4749-9739-2a08a4823e8b" in namespace "var-expansion-8097" to be "success or failure"
Apr 28 13:19:30.626: INFO: Pod "var-expansion-b7c1b2cf-d4b4-4749-9739-2a08a4823e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.717626ms
Apr 28 13:19:32.631: INFO: Pod "var-expansion-b7c1b2cf-d4b4-4749-9739-2a08a4823e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01706168s
Apr 28 13:19:34.635: INFO: Pod "var-expansion-b7c1b2cf-d4b4-4749-9739-2a08a4823e8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021486584s
STEP: Saw pod success
Apr 28 13:19:34.635: INFO: Pod "var-expansion-b7c1b2cf-d4b4-4749-9739-2a08a4823e8b" satisfied condition "success or failure"
Apr 28 13:19:34.638: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-b7c1b2cf-d4b4-4749-9739-2a08a4823e8b container dapi-container:
STEP: delete the pod
Apr 28 13:19:34.676: INFO: Waiting for pod var-expansion-b7c1b2cf-d4b4-4749-9739-2a08a4823e8b to disappear
Apr 28 13:19:34.692: INFO: Pod var-expansion-b7c1b2cf-d4b4-4749-9739-2a08a4823e8b no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:19:34.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8097" for this suite.
Apr 28 13:19:40.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:19:40.852: INFO: namespace var-expansion-8097 deletion completed in 6.157082058s
• [SLOW TEST:10.347 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:19:40.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3000
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 28 13:19:40.922: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 28 13:20:03.021: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.42 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3000 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 28 13:20:03.021: INFO: >>> kubeConfig: /root/.kube/config
I0428 13:20:03.056828 6 log.go:172] (0xc0008e0bb0) (0xc001fbc140) Create stream
I0428 13:20:03.056861 6 log.go:172] (0xc0008e0bb0) (0xc001fbc140) Stream added, broadcasting: 1
I0428 13:20:03.059102 6 log.go:172] (0xc0008e0bb0) Reply frame received for 1
I0428 13:20:03.059166 6 log.go:172] (0xc0008e0bb0) (0xc00224e780) Create stream
I0428 13:20:03.059184 6 log.go:172] (0xc0008e0bb0) (0xc00224e780) Stream added, broadcasting: 3
I0428 13:20:03.060195 6 log.go:172] (0xc0008e0bb0) Reply frame received for 3
I0428 13:20:03.060221 6 log.go:172] (0xc0008e0bb0) (0xc000a82a00) Create stream
I0428 13:20:03.060230 6 log.go:172] (0xc0008e0bb0) (0xc000a82a00) Stream added, broadcasting: 5
I0428 13:20:03.061257 6 log.go:172] (0xc0008e0bb0) Reply frame received for 5
I0428 13:20:04.146444 6 log.go:172] (0xc0008e0bb0) Data frame received for 3
I0428 13:20:04.146481
6 log.go:172] (0xc00224e780) (3) Data frame handling I0428 13:20:04.146495 6 log.go:172] (0xc00224e780) (3) Data frame sent I0428 13:20:04.146628 6 log.go:172] (0xc0008e0bb0) Data frame received for 5 I0428 13:20:04.146645 6 log.go:172] (0xc000a82a00) (5) Data frame handling I0428 13:20:04.146703 6 log.go:172] (0xc0008e0bb0) Data frame received for 3 I0428 13:20:04.146743 6 log.go:172] (0xc00224e780) (3) Data frame handling I0428 13:20:04.148395 6 log.go:172] (0xc0008e0bb0) Data frame received for 1 I0428 13:20:04.148433 6 log.go:172] (0xc001fbc140) (1) Data frame handling I0428 13:20:04.148458 6 log.go:172] (0xc001fbc140) (1) Data frame sent I0428 13:20:04.148485 6 log.go:172] (0xc0008e0bb0) (0xc001fbc140) Stream removed, broadcasting: 1 I0428 13:20:04.148509 6 log.go:172] (0xc0008e0bb0) Go away received I0428 13:20:04.148684 6 log.go:172] (0xc0008e0bb0) (0xc001fbc140) Stream removed, broadcasting: 1 I0428 13:20:04.148716 6 log.go:172] (0xc0008e0bb0) (0xc00224e780) Stream removed, broadcasting: 3 I0428 13:20:04.148730 6 log.go:172] (0xc0008e0bb0) (0xc000a82a00) Stream removed, broadcasting: 5 Apr 28 13:20:04.148: INFO: Found all expected endpoints: [netserver-0] Apr 28 13:20:04.152: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.242 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3000 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:20:04.152: INFO: >>> kubeConfig: /root/.kube/config I0428 13:20:04.182404 6 log.go:172] (0xc0018b64d0) (0xc0022c25a0) Create stream I0428 13:20:04.182431 6 log.go:172] (0xc0018b64d0) (0xc0022c25a0) Stream added, broadcasting: 1 I0428 13:20:04.184345 6 log.go:172] (0xc0018b64d0) Reply frame received for 1 I0428 13:20:04.184387 6 log.go:172] (0xc0018b64d0) (0xc001fbc1e0) Create stream I0428 13:20:04.184400 6 log.go:172] (0xc0018b64d0) (0xc001fbc1e0) Stream added, broadcasting: 3 I0428 13:20:04.185457 6 log.go:172] 
(0xc0018b64d0) Reply frame received for 3 I0428 13:20:04.185494 6 log.go:172] (0xc0018b64d0) (0xc00224e8c0) Create stream I0428 13:20:04.185513 6 log.go:172] (0xc0018b64d0) (0xc00224e8c0) Stream added, broadcasting: 5 I0428 13:20:04.186543 6 log.go:172] (0xc0018b64d0) Reply frame received for 5 I0428 13:20:05.270130 6 log.go:172] (0xc0018b64d0) Data frame received for 5 I0428 13:20:05.270196 6 log.go:172] (0xc00224e8c0) (5) Data frame handling I0428 13:20:05.270267 6 log.go:172] (0xc0018b64d0) Data frame received for 3 I0428 13:20:05.270289 6 log.go:172] (0xc001fbc1e0) (3) Data frame handling I0428 13:20:05.270311 6 log.go:172] (0xc001fbc1e0) (3) Data frame sent I0428 13:20:05.270412 6 log.go:172] (0xc0018b64d0) Data frame received for 3 I0428 13:20:05.270444 6 log.go:172] (0xc001fbc1e0) (3) Data frame handling I0428 13:20:05.273046 6 log.go:172] (0xc0018b64d0) Data frame received for 1 I0428 13:20:05.273071 6 log.go:172] (0xc0022c25a0) (1) Data frame handling I0428 13:20:05.273085 6 log.go:172] (0xc0022c25a0) (1) Data frame sent I0428 13:20:05.273101 6 log.go:172] (0xc0018b64d0) (0xc0022c25a0) Stream removed, broadcasting: 1 I0428 13:20:05.273376 6 log.go:172] (0xc0018b64d0) (0xc0022c25a0) Stream removed, broadcasting: 1 I0428 13:20:05.273406 6 log.go:172] (0xc0018b64d0) (0xc001fbc1e0) Stream removed, broadcasting: 3 I0428 13:20:05.273430 6 log.go:172] (0xc0018b64d0) (0xc00224e8c0) Stream removed, broadcasting: 5 Apr 28 13:20:05.273: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:20:05.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0428 13:20:05.273584 6 log.go:172] (0xc0018b64d0) Go away received STEP: Destroying namespace "pod-network-test-3000" for this suite. 
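For reference, the `host-test-container-pod` that the ExecWithOptions calls above run inside can be sketched as a plain manifest. This is a sketch only: the image tag and the hostNetwork setting are assumptions based on the hostexec pattern; the probe command in the comment is the one from the log.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  hostNetwork: true            # assumption: node-pod checks exec from the host's network namespace
  restartPolicy: Never
  containers:
  - name: hostexec
    image: gcr.io/kubernetes-e2e-test-images/hostexec:1.1   # assumed image tag
    command: ["sleep", "3600"] # keep the pod alive so the suite can exec into it
    # The suite then execs the UDP probe shown in the log into this container:
    #   echo hostName | nc -w 1 -u <netserver pod IP> 8081 | grep -v '^\s*$'
```

A non-empty reply from each netserver pod is what produces the "Found all expected endpoints" lines above.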
Apr 28 13:20:29.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:20:29.378: INFO: namespace pod-network-test-3000 deletion completed in 24.099873008s • [SLOW TEST:48.525 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:20:29.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 28 13:20:29.440: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 28 13:20:29.452: INFO: Waiting for terminating namespaces to be deleted... 
Apr 28 13:20:29.455: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 28 13:20:29.462: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 28 13:20:29.462: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 13:20:29.462: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 28 13:20:29.462: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 13:20:29.462: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 28 13:20:29.468: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 28 13:20:29.468: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 13:20:29.468: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 28 13:20:29.468: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 13:20:29.468: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 28 13:20:29.468: INFO: Container coredns ready: true, restart count 0 Apr 28 13:20:29.468: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 28 13:20:29.468: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-775b17ce-926a-440f-9bad-ac6978548027 42 STEP: Trying to relaunch the pod, now with labels. 
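The relaunched pod pins itself to the labeled node with a `nodeSelector`. A minimal sketch of that pod follows; the label key and the value `"42"` come from the log above, while the pod name and image are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels            # assumed name
spec:
  # Only a node carrying this exact label/value pair is eligible, which is
  # what "NodeSelector is respected if matching" verifies.
  nodeSelector:
    kubernetes.io/e2e-775b17ce-926a-440f-9bad-ac6978548027: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1   # assumed image
```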
STEP: removing the label kubernetes.io/e2e-775b17ce-926a-440f-9bad-ac6978548027 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-775b17ce-926a-440f-9bad-ac6978548027 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:20:37.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8485" for this suite. Apr 28 13:20:55.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:20:55.728: INFO: namespace sched-pred-8485 deletion completed in 18.102398501s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.350 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:20:55.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 28 13:20:55.818: INFO: Waiting up to 5m0s for pod "pod-2459c1d6-b3e3-4b7d-a908-058e7e4add2e" in namespace "emptydir-9984" to be "success or failure" Apr 28 13:20:55.822: INFO: Pod "pod-2459c1d6-b3e3-4b7d-a908-058e7e4add2e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.437955ms Apr 28 13:20:57.826: INFO: Pod "pod-2459c1d6-b3e3-4b7d-a908-058e7e4add2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007625868s Apr 28 13:20:59.830: INFO: Pod "pod-2459c1d6-b3e3-4b7d-a908-058e7e4add2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011526423s STEP: Saw pod success Apr 28 13:20:59.830: INFO: Pod "pod-2459c1d6-b3e3-4b7d-a908-058e7e4add2e" satisfied condition "success or failure" Apr 28 13:20:59.833: INFO: Trying to get logs from node iruya-worker pod pod-2459c1d6-b3e3-4b7d-a908-058e7e4add2e container test-container: STEP: delete the pod Apr 28 13:20:59.857: INFO: Waiting for pod pod-2459c1d6-b3e3-4b7d-a908-058e7e4add2e to disappear Apr 28 13:20:59.861: INFO: Pod pod-2459c1d6-b3e3-4b7d-a908-058e7e4add2e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:20:59.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9984" for this suite. 
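The pod this emptyDir test creates can be sketched as the manifest below. The mounttest image and its flags are assumptions about how the 0644 mode and file contents are verified; the `runAsUser` value is an assumed non-root UID.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # the "non-root" part of the test name (assumed UID)
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    args:
    - --fs_type=/test-volume
    - --new_file_0644=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # "default" medium: node storage, not tmpfs
```

The container writes a file with mode 0644 into the volume and prints the resulting type and permissions, which the test compares against the log output before the pod reaches Succeeded.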
Apr 28 13:21:05.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:21:05.957: INFO: namespace emptydir-9984 deletion completed in 6.0923437s • [SLOW TEST:10.229 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:21:05.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-19240325-5e83-4d44-b9d6-04d8a6871d2e STEP: Creating a pod to test consume secrets Apr 28 13:21:06.035: INFO: Waiting up to 5m0s for pod "pod-secrets-187d05d6-32f3-412d-886f-02bb5d9362f1" in namespace "secrets-1804" to be "success or failure" Apr 28 13:21:06.068: INFO: Pod "pod-secrets-187d05d6-32f3-412d-886f-02bb5d9362f1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.436004ms Apr 28 13:21:08.072: INFO: Pod "pod-secrets-187d05d6-32f3-412d-886f-02bb5d9362f1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.036467486s Apr 28 13:21:10.076: INFO: Pod "pod-secrets-187d05d6-32f3-412d-886f-02bb5d9362f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040558683s STEP: Saw pod success Apr 28 13:21:10.076: INFO: Pod "pod-secrets-187d05d6-32f3-412d-886f-02bb5d9362f1" satisfied condition "success or failure" Apr 28 13:21:10.079: INFO: Trying to get logs from node iruya-worker pod pod-secrets-187d05d6-32f3-412d-886f-02bb5d9362f1 container secret-volume-test: STEP: delete the pod Apr 28 13:21:10.103: INFO: Waiting for pod pod-secrets-187d05d6-32f3-412d-886f-02bb5d9362f1 to disappear Apr 28 13:21:10.107: INFO: Pod pod-secrets-187d05d6-32f3-412d-886f-02bb5d9362f1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:21:10.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1804" for this suite. Apr 28 13:21:16.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:21:16.210: INFO: namespace secrets-1804 deletion completed in 6.09923177s • [SLOW TEST:10.253 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 
13:21:16.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:21:21.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3783" for this suite. Apr 28 13:21:27.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:21:27.898: INFO: namespace watch-3783 deletion completed in 6.18026738s • [SLOW TEST:11.687 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:21:27.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-38 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-38 to expose endpoints map[] Apr 28 13:21:27.990: INFO: Get endpoints failed (17.629206ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 28 13:21:28.996: INFO: successfully validated that service endpoint-test2 in namespace services-38 exposes endpoints map[] (1.023382345s elapsed) STEP: Creating pod pod1 in namespace services-38 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-38 to expose endpoints map[pod1:[80]] Apr 28 13:21:33.073: INFO: successfully validated that service endpoint-test2 in namespace services-38 exposes endpoints map[pod1:[80]] (4.070830297s elapsed) STEP: Creating pod pod2 in namespace services-38 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-38 to expose endpoints map[pod1:[80] pod2:[80]] Apr 28 13:21:36.176: INFO: successfully validated that service endpoint-test2 in namespace services-38 exposes endpoints map[pod1:[80] pod2:[80]] (3.099087075s elapsed) STEP: Deleting pod pod1 in namespace services-38 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-38 to expose endpoints map[pod2:[80]] Apr 28 13:21:37.202: INFO: successfully validated that service endpoint-test2 in namespace services-38 exposes endpoints map[pod2:[80]] (1.022214785s elapsed) STEP: Deleting pod pod2 in namespace services-38 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-38 to expose endpoints map[] Apr 28 13:21:38.360: INFO: successfully validated that service endpoint-test2 in namespace services-38 exposes 
endpoints map[] (1.123756093s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:21:38.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-38" for this suite. Apr 28 13:21:44.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:21:44.467: INFO: namespace services-38 deletion completed in 6.088894238s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:16.568 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:21:44.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 
13:21:44.573: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b91b3114-1039-4c3f-ae9d-2e7d5215af6e" in namespace "projected-6508" to be "success or failure" Apr 28 13:21:44.588: INFO: Pod "downwardapi-volume-b91b3114-1039-4c3f-ae9d-2e7d5215af6e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.722153ms Apr 28 13:21:46.592: INFO: Pod "downwardapi-volume-b91b3114-1039-4c3f-ae9d-2e7d5215af6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019150096s Apr 28 13:21:48.597: INFO: Pod "downwardapi-volume-b91b3114-1039-4c3f-ae9d-2e7d5215af6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023855686s STEP: Saw pod success Apr 28 13:21:48.597: INFO: Pod "downwardapi-volume-b91b3114-1039-4c3f-ae9d-2e7d5215af6e" satisfied condition "success or failure" Apr 28 13:21:48.600: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b91b3114-1039-4c3f-ae9d-2e7d5215af6e container client-container: STEP: delete the pod Apr 28 13:21:48.651: INFO: Waiting for pod downwardapi-volume-b91b3114-1039-4c3f-ae9d-2e7d5215af6e to disappear Apr 28 13:21:48.655: INFO: Pod downwardapi-volume-b91b3114-1039-4c3f-ae9d-2e7d5215af6e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:21:48.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6508" for this suite. 
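The projected downwardAPI volume that exposes the container's memory request follows the standard `resourceFieldRef` shape. A sketch, with assumed names and sizes (the container name `client-container` is taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                   # assumed request; exposed in bytes when no divisor is set
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```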
Apr 28 13:21:54.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:21:54.816: INFO: namespace projected-6508 deletion completed in 6.157362607s • [SLOW TEST:10.348 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:21:54.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 28 13:21:54.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 28 13:21:55.064: INFO: stderr: "" Apr 28 13:21:55.064: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:21:55.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1185" for this suite. 
Apr 28 13:22:01.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:22:01.172: INFO: namespace kubectl-1185 deletion completed in 6.102369759s • [SLOW TEST:6.356 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:22:01.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components
Apr 28 13:22:01.280: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Apr 28 13:22:01.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9144'
Apr 28 13:22:04.036: INFO: stderr: "" Apr 28 13:22:04.036: INFO: stdout: "service/redis-slave created\n"
Apr 28 13:22:04.036: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Apr 28 13:22:04.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9144'
Apr 28 13:22:04.348: INFO: stderr: "" Apr 28 13:22:04.348: INFO: stdout: "service/redis-master created\n"
Apr 28 13:22:04.348: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apr 28 13:22:04.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9144'
Apr 28 13:22:04.643: INFO: stderr: "" Apr 28 13:22:04.643: INFO: stdout: "service/frontend created\n"
Apr 28 13:22:04.643: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Apr 28 13:22:04.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9144'
Apr 28 13:22:04.942: INFO: stderr: "" Apr 28 13:22:04.942: INFO: stdout: "deployment.apps/frontend created\n"
Apr 28 13:22:04.942: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 28 13:22:04.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9144'
Apr 28 13:22:05.293: INFO: stderr: "" Apr 28 13:22:05.294: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 28 13:22:05.294: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Apr 28 13:22:05.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9144'
Apr 28 13:22:05.552: INFO: stderr: "" Apr 28 13:22:05.552: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app Apr 28 13:22:05.553: INFO: Waiting for all frontend pods to be Running. Apr 28 13:22:15.603: INFO: Waiting for frontend to serve content. Apr 28 13:22:15.624: INFO: Trying to add a new entry to the guestbook. Apr 28 13:22:15.639: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 28 13:22:15.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9144'
Apr 28 13:22:15.839: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 28 13:22:15.839: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 28 13:22:15.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9144'
Apr 28 13:22:15.977: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 28 13:22:15.977: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 28 13:22:15.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9144'
Apr 28 13:22:16.087: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 28 13:22:16.087: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 28 13:22:16.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9144'
Apr 28 13:22:16.188: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 28 13:22:16.188: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 28 13:22:16.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9144'
Apr 28 13:22:16.283: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 28 13:22:16.283: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 28 13:22:16.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9144'
Apr 28 13:22:16.436: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 28 13:22:16.436: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:22:16.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9144" for this suite.
Apr 28 13:22:56.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:22:56.580: INFO: namespace kubectl-9144 deletion completed in 40.13664902s

• [SLOW TEST:55.408 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:22:56.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-62a49d2a-5dbe-4725-8e38-a31381025272 in namespace container-probe-4403
Apr 28 13:23:00.643: INFO: Started pod liveness-62a49d2a-5dbe-4725-8e38-a31381025272 in namespace container-probe-4403
STEP: checking the pod's current state and verifying that restartCount is present
Apr 28 13:23:00.647: INFO: Initial restart count of pod liveness-62a49d2a-5dbe-4725-8e38-a31381025272 is 0
Apr 28 13:23:18.688: INFO: Restart count of pod container-probe-4403/liveness-62a49d2a-5dbe-4725-8e38-a31381025272 is now 1 (18.040632708s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:23:18.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4403" for this suite.
Apr 28 13:23:24.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:23:24.821: INFO: namespace container-probe-4403 deletion completed in 6.089219708s

• [SLOW TEST:28.240 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:23:24.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:23:24.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9244" for this suite.
Apr 28 13:23:46.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:23:47.063: INFO: namespace pods-9244 deletion completed in 22.156337095s

• [SLOW TEST:22.241 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:23:47.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-sxjx
STEP: Creating a pod to test atomic-volume-subpath
Apr 28 13:23:47.164: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-sxjx" in namespace "subpath-8444" to be "success or failure"
Apr 28 13:23:47.186: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Pending", Reason="", readiness=false. Elapsed: 21.656634ms
Apr 28 13:23:49.190: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026104393s
Apr 28 13:23:51.194: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 4.029996533s
Apr 28 13:23:53.199: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 6.03447616s
Apr 28 13:23:55.203: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 8.0391267s
Apr 28 13:23:57.208: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 10.043302332s
Apr 28 13:23:59.211: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 12.046936892s
Apr 28 13:24:01.215: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 14.051167862s
Apr 28 13:24:03.220: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 16.055403569s
Apr 28 13:24:05.224: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 18.059809649s
Apr 28 13:24:07.228: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 20.06378444s
Apr 28 13:24:09.232: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Running", Reason="", readiness=true. Elapsed: 22.067779614s
Apr 28 13:24:11.237: INFO: Pod "pod-subpath-test-secret-sxjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.072225115s
STEP: Saw pod success
Apr 28 13:24:11.237: INFO: Pod "pod-subpath-test-secret-sxjx" satisfied condition "success or failure"
Apr 28 13:24:11.241: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-sxjx container test-container-subpath-secret-sxjx:
STEP: delete the pod
Apr 28 13:24:11.320: INFO: Waiting for pod pod-subpath-test-secret-sxjx to disappear
Apr 28 13:24:11.346: INFO: Pod pod-subpath-test-secret-sxjx no longer exists
STEP: Deleting pod pod-subpath-test-secret-sxjx
Apr 28 13:24:11.346: INFO: Deleting pod "pod-subpath-test-secret-sxjx" in namespace "subpath-8444"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:24:11.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8444" for this suite.
Apr 28 13:24:17.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:24:17.439: INFO: namespace subpath-8444 deletion completed in 6.085654998s

• [SLOW TEST:30.376 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:24:17.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e149656c-78f4-4f05-b284-9ffd1fcd6213
STEP: Creating a pod to test consume configMaps
Apr 28 13:24:17.572: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd909608-9c63-4221-837a-4aaa1d32178f" in namespace "projected-9923" to be "success or failure"
Apr 28 13:24:17.576: INFO: Pod "pod-projected-configmaps-cd909608-9c63-4221-837a-4aaa1d32178f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.520437ms
Apr 28 13:24:19.580: INFO: Pod "pod-projected-configmaps-cd909608-9c63-4221-837a-4aaa1d32178f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00804622s
Apr 28 13:24:21.585: INFO: Pod "pod-projected-configmaps-cd909608-9c63-4221-837a-4aaa1d32178f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012067774s
STEP: Saw pod success
Apr 28 13:24:21.585: INFO: Pod "pod-projected-configmaps-cd909608-9c63-4221-837a-4aaa1d32178f" satisfied condition "success or failure"
Apr 28 13:24:21.587: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-cd909608-9c63-4221-837a-4aaa1d32178f container projected-configmap-volume-test:
STEP: delete the pod
Apr 28 13:24:21.613: INFO: Waiting for pod pod-projected-configmaps-cd909608-9c63-4221-837a-4aaa1d32178f to disappear
Apr 28 13:24:21.617: INFO: Pod pod-projected-configmaps-cd909608-9c63-4221-837a-4aaa1d32178f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:24:21.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9923" for this suite.
Apr 28 13:24:27.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:24:27.726: INFO: namespace projected-9923 deletion completed in 6.105839925s

• [SLOW TEST:10.287 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:24:27.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6094
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6094 to expose endpoints map[]
Apr 28 13:24:27.847: INFO: Get endpoints failed (2.706661ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 28 13:24:28.851: INFO: successfully validated that service multi-endpoint-test in namespace services-6094 exposes endpoints map[] (1.006752151s elapsed)
STEP: Creating pod pod1 in namespace services-6094
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6094 to expose endpoints map[pod1:[100]]
Apr 28 13:24:32.936: INFO: successfully validated that service multi-endpoint-test in namespace services-6094 exposes endpoints map[pod1:[100]] (4.078346845s elapsed)
STEP: Creating pod pod2 in namespace services-6094
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6094 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 28 13:24:35.989: INFO: successfully validated that service multi-endpoint-test in namespace services-6094 exposes endpoints map[pod1:[100] pod2:[101]] (3.048175941s elapsed)
STEP: Deleting pod pod1 in namespace services-6094
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6094 to expose endpoints map[pod2:[101]]
Apr 28 13:24:37.037: INFO: successfully validated that service multi-endpoint-test in namespace services-6094 exposes endpoints map[pod2:[101]] (1.038975996s elapsed)
STEP: Deleting pod pod2 in namespace services-6094
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6094 to expose endpoints map[]
Apr 28 13:24:38.114: INFO: successfully validated that service multi-endpoint-test in namespace services-6094 exposes endpoints map[] (1.020204234s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:24:38.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6094" for this suite.
Apr 28 13:24:44.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:24:44.289: INFO: namespace services-6094 deletion completed in 6.082972527s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:16.562 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:24:44.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 28 13:24:44.372: INFO: Creating deployment "test-recreate-deployment"
Apr 28 13:24:44.376: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 28 13:24:44.401: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 28 13:24:46.407: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 28 13:24:46.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723677084, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723677084, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723677084, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723677084, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 28 13:24:48.414: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 28 13:24:48.421: INFO: Updating deployment test-recreate-deployment
Apr 28 13:24:48.421: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 28 13:24:48.701: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-1417,SelfLink:/apis/apps/v1/namespaces/deployment-1417/deployments/test-recreate-deployment,UID:8b4bf50b-7120-470d-bb3e-8d5a158c86ac,ResourceVersion:7898771,Generation:2,CreationTimestamp:2020-04-28 13:24:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-28 13:24:48 +0000 UTC 2020-04-28 13:24:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-28 13:24:48 +0000 UTC 2020-04-28 13:24:44 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 28 13:24:48.721: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-1417,SelfLink:/apis/apps/v1/namespaces/deployment-1417/replicasets/test-recreate-deployment-5c8c9cc69d,UID:f5f8b337-b628-4688-9061-68cb49b90012,ResourceVersion:7898768,Generation:1,CreationTimestamp:2020-04-28 13:24:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8b4bf50b-7120-470d-bb3e-8d5a158c86ac 0xc002c9c427 0xc002c9c428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 13:24:48.721: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 28 13:24:48.721: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-1417,SelfLink:/apis/apps/v1/namespaces/deployment-1417/replicasets/test-recreate-deployment-6df85df6b9,UID:62d5c3b3-4e44-415c-89af-41f61ec5dcc9,ResourceVersion:7898758,Generation:2,CreationTimestamp:2020-04-28 13:24:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8b4bf50b-7120-470d-bb3e-8d5a158c86ac 0xc002c9c517 0xc002c9c518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 13:24:48.820: INFO: Pod "test-recreate-deployment-5c8c9cc69d-hq7ph" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-hq7ph,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-1417,SelfLink:/api/v1/namespaces/deployment-1417/pods/test-recreate-deployment-5c8c9cc69d-hq7ph,UID:b3acc8a0-f91b-461c-8af4-4f4f27e04b8f,ResourceVersion:7898772,Generation:0,CreationTimestamp:2020-04-28 13:24:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d f5f8b337-b628-4688-9061-68cb49b90012 0xc002c9cf37 0xc002c9cf38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-g6ql9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g6ql9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-g6ql9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c9cfd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c9cff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:24:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:24:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:24:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:24:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-28 13:24:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:24:48.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1417" for this suite. 
Apr 28 13:24:54.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:24:54.971: INFO: namespace deployment-1417 deletion completed in 6.14759138s • [SLOW TEST:10.681 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:24:54.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-70af3d06-de20-471b-84c7-68580625185c STEP: Creating a pod to test consume configMaps Apr 28 13:24:55.070: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8454230-41fb-4cbb-8a27-da51c657c02a" in namespace "configmap-2409" to be "success or failure" Apr 28 13:24:55.090: INFO: Pod "pod-configmaps-a8454230-41fb-4cbb-8a27-da51c657c02a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.696555ms Apr 28 13:24:57.119: INFO: Pod "pod-configmaps-a8454230-41fb-4cbb-8a27-da51c657c02a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049544059s Apr 28 13:24:59.124: INFO: Pod "pod-configmaps-a8454230-41fb-4cbb-8a27-da51c657c02a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054002799s STEP: Saw pod success Apr 28 13:24:59.124: INFO: Pod "pod-configmaps-a8454230-41fb-4cbb-8a27-da51c657c02a" satisfied condition "success or failure" Apr 28 13:24:59.127: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a8454230-41fb-4cbb-8a27-da51c657c02a container configmap-volume-test: STEP: delete the pod Apr 28 13:24:59.172: INFO: Waiting for pod pod-configmaps-a8454230-41fb-4cbb-8a27-da51c657c02a to disappear Apr 28 13:24:59.245: INFO: Pod pod-configmaps-a8454230-41fb-4cbb-8a27-da51c657c02a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:24:59.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2409" for this suite. Apr 28 13:25:05.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:25:05.323: INFO: namespace configmap-2409 deletion completed in 6.074582546s • [SLOW TEST:10.352 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:25:05.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 13:25:05.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c948152-5979-4538-acec-57b062dd1b46" in namespace "downward-api-7863" to be "success or failure" Apr 28 13:25:05.444: INFO: Pod "downwardapi-volume-8c948152-5979-4538-acec-57b062dd1b46": Phase="Pending", Reason="", readiness=false. Elapsed: 20.785947ms Apr 28 13:25:07.452: INFO: Pod "downwardapi-volume-8c948152-5979-4538-acec-57b062dd1b46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028514071s Apr 28 13:25:09.456: INFO: Pod "downwardapi-volume-8c948152-5979-4538-acec-57b062dd1b46": Phase="Running", Reason="", readiness=true. Elapsed: 4.032572679s Apr 28 13:25:11.460: INFO: Pod "downwardapi-volume-8c948152-5979-4538-acec-57b062dd1b46": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.036376856s STEP: Saw pod success Apr 28 13:25:11.460: INFO: Pod "downwardapi-volume-8c948152-5979-4538-acec-57b062dd1b46" satisfied condition "success or failure" Apr 28 13:25:11.463: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8c948152-5979-4538-acec-57b062dd1b46 container client-container: STEP: delete the pod Apr 28 13:25:11.495: INFO: Waiting for pod downwardapi-volume-8c948152-5979-4538-acec-57b062dd1b46 to disappear Apr 28 13:25:11.511: INFO: Pod downwardapi-volume-8c948152-5979-4538-acec-57b062dd1b46 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:25:11.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7863" for this suite. Apr 28 13:25:17.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:25:17.578: INFO: namespace downward-api-7863 deletion completed in 6.064044513s • [SLOW TEST:12.254 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:25:17.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 28 13:25:17.694: INFO: Waiting up to 5m0s for pod "pod-b465a864-1102-4637-a866-a5d4f5fa08be" in namespace "emptydir-1423" to be "success or failure" Apr 28 13:25:17.796: INFO: Pod "pod-b465a864-1102-4637-a866-a5d4f5fa08be": Phase="Pending", Reason="", readiness=false. Elapsed: 101.462773ms Apr 28 13:25:19.800: INFO: Pod "pod-b465a864-1102-4637-a866-a5d4f5fa08be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10570375s Apr 28 13:25:21.804: INFO: Pod "pod-b465a864-1102-4637-a866-a5d4f5fa08be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109771533s Apr 28 13:25:23.808: INFO: Pod "pod-b465a864-1102-4637-a866-a5d4f5fa08be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113979079s STEP: Saw pod success Apr 28 13:25:23.808: INFO: Pod "pod-b465a864-1102-4637-a866-a5d4f5fa08be" satisfied condition "success or failure" Apr 28 13:25:23.811: INFO: Trying to get logs from node iruya-worker pod pod-b465a864-1102-4637-a866-a5d4f5fa08be container test-container: STEP: delete the pod Apr 28 13:25:23.846: INFO: Waiting for pod pod-b465a864-1102-4637-a866-a5d4f5fa08be to disappear Apr 28 13:25:23.874: INFO: Pod pod-b465a864-1102-4637-a866-a5d4f5fa08be no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:25:23.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1423" for this suite. 
Apr 28 13:25:29.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:25:29.965: INFO: namespace emptydir-1423 deletion completed in 6.087456882s • [SLOW TEST:12.387 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:25:29.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 28 13:25:30.065: INFO: Waiting up to 5m0s for pod "pod-0ee158b9-dccb-413c-a19e-783b4314b69f" in namespace "emptydir-1125" to be "success or failure" Apr 28 13:25:30.085: INFO: Pod "pod-0ee158b9-dccb-413c-a19e-783b4314b69f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.265495ms Apr 28 13:25:32.089: INFO: Pod "pod-0ee158b9-dccb-413c-a19e-783b4314b69f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024364216s Apr 28 13:25:34.093: INFO: Pod "pod-0ee158b9-dccb-413c-a19e-783b4314b69f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028030913s STEP: Saw pod success Apr 28 13:25:34.093: INFO: Pod "pod-0ee158b9-dccb-413c-a19e-783b4314b69f" satisfied condition "success or failure" Apr 28 13:25:34.096: INFO: Trying to get logs from node iruya-worker2 pod pod-0ee158b9-dccb-413c-a19e-783b4314b69f container test-container: STEP: delete the pod Apr 28 13:25:34.130: INFO: Waiting for pod pod-0ee158b9-dccb-413c-a19e-783b4314b69f to disappear Apr 28 13:25:34.134: INFO: Pod pod-0ee158b9-dccb-413c-a19e-783b4314b69f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:25:34.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1125" for this suite. Apr 28 13:25:40.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:25:40.238: INFO: namespace emptydir-1125 deletion completed in 6.101446306s • [SLOW TEST:10.272 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 28 13:25:40.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-966 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 28 13:25:40.364: INFO: Found 0 stateful pods, waiting for 3 Apr 28 13:25:50.369: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:25:50.369: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:25:50.369: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 28 13:26:00.368: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:26:00.369: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:26:00.369: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 28 13:26:00.396: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 28 13:26:10.483: INFO: Updating stateful set ss2 Apr 28 13:26:10.515: INFO: Waiting for Pod statefulset-966/ss2-2 to have revision 
ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 28 13:26:20.523: INFO: Waiting for Pod statefulset-966/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Apr 28 13:26:30.651: INFO: Found 2 stateful pods, waiting for 3 Apr 28 13:26:40.656: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:26:40.656: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 13:26:40.656: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 28 13:26:40.679: INFO: Updating stateful set ss2 Apr 28 13:26:40.699: INFO: Waiting for Pod statefulset-966/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 28 13:26:50.708: INFO: Waiting for Pod statefulset-966/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 28 13:27:00.727: INFO: Updating stateful set ss2 Apr 28 13:27:00.773: INFO: Waiting for StatefulSet statefulset-966/ss2 to complete update Apr 28 13:27:00.773: INFO: Waiting for Pod statefulset-966/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 28 13:27:10.782: INFO: Waiting for StatefulSet statefulset-966/ss2 to complete update Apr 28 13:27:10.782: INFO: Waiting for Pod statefulset-966/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 28 13:27:20.785: INFO: Deleting all statefulset in ns statefulset-966 Apr 28 13:27:20.787: INFO: Scaling statefulset ss2 to 0 Apr 28 13:27:50.806: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 13:27:50.809: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:27:50.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-966" for this suite. Apr 28 13:27:56.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:27:56.908: INFO: namespace statefulset-966 deletion completed in 6.076205451s • [SLOW TEST:136.670 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:27:56.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-e0254a53-c4c8-4eb2-8b8c-55acd8a21019 STEP: Creating the pod STEP: Updating configmap 
projected-configmap-test-upd-e0254a53-c4c8-4eb2-8b8c-55acd8a21019 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:29:29.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4851" for this suite. Apr 28 13:29:49.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:29:49.747: INFO: namespace projected-4851 deletion completed in 20.111944925s • [SLOW TEST:112.839 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:29:49.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 28 13:29:49.803: INFO: namespace kubectl-9537 Apr 28 13:29:49.803: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9537' Apr 28 13:29:50.106: INFO: stderr: "" Apr 28 13:29:50.106: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Apr 28 13:29:51.111: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:29:51.111: INFO: Found 0 / 1 Apr 28 13:29:52.111: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:29:52.111: INFO: Found 0 / 1 Apr 28 13:29:53.111: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:29:53.111: INFO: Found 0 / 1 Apr 28 13:29:54.111: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:29:54.111: INFO: Found 1 / 1 Apr 28 13:29:54.111: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 28 13:29:54.114: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:29:54.114: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 28 13:29:54.114: INFO: wait on redis-master startup in kubectl-9537 Apr 28 13:29:54.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rfq9n redis-master --namespace=kubectl-9537' Apr 28 13:29:54.223: INFO: stderr: "" Apr 28 13:29:54.223: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Apr 13:29:52.863 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Apr 13:29:52.863 # Server started, Redis version 3.2.12\n1:M 28 Apr 13:29:52.863 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Apr 13:29:52.863 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 28 13:29:54.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9537' Apr 28 13:29:54.370: INFO: stderr: "" Apr 28 13:29:54.370: INFO: stdout: "service/rm2 exposed\n" Apr 28 13:29:54.422: INFO: Service rm2 in namespace kubectl-9537 found. STEP: exposing service Apr 28 13:29:56.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9537' Apr 28 13:29:56.567: INFO: stderr: "" Apr 28 13:29:56.567: INFO: stdout: "service/rm3 exposed\n" Apr 28 13:29:56.578: INFO: Service rm3 in namespace kubectl-9537 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:29:58.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9537" for this suite. Apr 28 13:30:22.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:30:22.697: INFO: namespace kubectl-9537 deletion completed in 24.1068075s • [SLOW TEST:32.948 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:30:22.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Apr 28 13:30:22.787: INFO: Waiting up to 5m0s for pod "var-expansion-bc0a26ad-cd02-4c79-91a8-b78636bcedff" in namespace "var-expansion-4626" to be "success or failure" Apr 28 13:30:22.800: INFO: Pod 
"var-expansion-bc0a26ad-cd02-4c79-91a8-b78636bcedff": Phase="Pending", Reason="", readiness=false. Elapsed: 13.133688ms Apr 28 13:30:24.805: INFO: Pod "var-expansion-bc0a26ad-cd02-4c79-91a8-b78636bcedff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017651392s Apr 28 13:30:26.808: INFO: Pod "var-expansion-bc0a26ad-cd02-4c79-91a8-b78636bcedff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020938576s STEP: Saw pod success Apr 28 13:30:26.808: INFO: Pod "var-expansion-bc0a26ad-cd02-4c79-91a8-b78636bcedff" satisfied condition "success or failure" Apr 28 13:30:26.811: INFO: Trying to get logs from node iruya-worker pod var-expansion-bc0a26ad-cd02-4c79-91a8-b78636bcedff container dapi-container: STEP: delete the pod Apr 28 13:30:26.827: INFO: Waiting for pod var-expansion-bc0a26ad-cd02-4c79-91a8-b78636bcedff to disappear Apr 28 13:30:26.831: INFO: Pod var-expansion-bc0a26ad-cd02-4c79-91a8-b78636bcedff no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:30:26.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4626" for this suite. 
Apr 28 13:30:32.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:30:33.075: INFO: namespace var-expansion-4626 deletion completed in 6.241357458s • [SLOW TEST:10.378 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:30:33.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:30:37.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-412" for this suite. 
Apr 28 13:30:43.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:30:43.420: INFO: namespace emptydir-wrapper-412 deletion completed in 6.152496072s • [SLOW TEST:10.344 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:30:43.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 28 13:30:43.468: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:30:48.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1960" for this suite. 
Apr 28 13:30:54.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:30:54.822: INFO: namespace init-container-1960 deletion completed in 6.09486619s
• [SLOW TEST:11.401 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:30:54.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 28 13:30:54.917: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4a85d92-2859-4889-8a38-35f27d5bf725" in namespace "downward-api-6757" to be "success or failure"
Apr 28 13:30:54.922: INFO: Pod "downwardapi-volume-a4a85d92-2859-4889-8a38-35f27d5bf725": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537511ms
Apr 28 13:30:56.963: INFO: Pod "downwardapi-volume-a4a85d92-2859-4889-8a38-35f27d5bf725": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045485716s
Apr 28 13:30:58.967: INFO: Pod "downwardapi-volume-a4a85d92-2859-4889-8a38-35f27d5bf725": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050036659s
STEP: Saw pod success
Apr 28 13:30:58.967: INFO: Pod "downwardapi-volume-a4a85d92-2859-4889-8a38-35f27d5bf725" satisfied condition "success or failure"
Apr 28 13:30:58.970: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a4a85d92-2859-4889-8a38-35f27d5bf725 container client-container:
STEP: delete the pod
Apr 28 13:30:58.989: INFO: Waiting for pod downwardapi-volume-a4a85d92-2859-4889-8a38-35f27d5bf725 to disappear
Apr 28 13:30:58.993: INFO: Pod downwardapi-volume-a4a85d92-2859-4889-8a38-35f27d5bf725 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:30:58.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6757" for this suite.
Apr 28 13:31:05.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:31:05.085: INFO: namespace downward-api-6757 deletion completed in 6.089547804s
• [SLOW TEST:10.263 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:31:05.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 28 13:31:09.218: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:31:09.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-204" for this suite.
Apr 28 13:31:15.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:31:15.510: INFO: namespace container-runtime-204 deletion completed in 6.142118373s
• [SLOW TEST:10.423 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:31:15.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-6a4167f9-041a-473a-8c99-59cfa96df39e
STEP: Creating secret with name s-test-opt-upd-d2278399-2268-4424-b9cf-4fa94a26cd67
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-6a4167f9-041a-473a-8c99-59cfa96df39e
STEP: Updating secret s-test-opt-upd-d2278399-2268-4424-b9cf-4fa94a26cd67
STEP: Creating secret with name s-test-opt-create-c307c589-2e03-4d88-8af8-d25daf2cf6a2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:31:25.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2427" for this suite.
Apr 28 13:31:47.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:31:47.799: INFO: namespace projected-2427 deletion completed in 22.103198661s
• [SLOW TEST:32.289 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:31:47.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 28 13:31:51.913: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-95215927-361d-4cfc-a269-8bf5786ec4d3,GenerateName:,Namespace:events-693,SelfLink:/api/v1/namespaces/events-693/pods/send-events-95215927-361d-4cfc-a269-8bf5786ec4d3,UID:380e1ca6-6a24-4656-bc51-72b51a4010b9,ResourceVersion:7900230,Generation:0,CreationTimestamp:2020-04-28 13:31:47 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 877897638,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9fkvn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9fkvn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-9fkvn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002823b00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002823b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:31:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:31:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:31:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:31:47 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.68,StartTime:2020-04-28 13:31:47 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-28 13:31:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://1a688d07f681a9768b669e28c56a4da643be074c5f6da255a12cd0d31aafbc4f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Apr 28 13:31:53.917: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 28 13:31:55.922: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:31:55.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-693" for this suite.
Apr 28 13:32:33.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:32:34.040: INFO: namespace events-693 deletion completed in 38.09825012s
• [SLOW TEST:46.240 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:32:34.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:32:38.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9383" for this suite.
Apr 28 13:32:44.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:32:44.240: INFO: namespace kubelet-test-9383 deletion completed in 6.090962355s
• [SLOW TEST:10.200 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:32:44.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 28 13:32:48.864: INFO: Successfully updated pod "annotationupdate2e6831c3-2789-4158-b359-104a4144d4df"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:32:50.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1916" for this suite.
Apr 28 13:33:12.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:33:13.001: INFO: namespace projected-1916 deletion completed in 22.093694694s
• [SLOW TEST:28.761 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:33:13.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b2edd228-a12f-43e1-aaee-4fd11b022554
STEP: Creating a pod to test consume configMaps
Apr 28 13:33:13.098: INFO: Waiting up to 5m0s for pod "pod-configmaps-5806f65e-18c6-4337-bc89-cb0363cc17e4" in namespace "configmap-1091" to be "success or failure"
Apr 28 13:33:13.118: INFO: Pod "pod-configmaps-5806f65e-18c6-4337-bc89-cb0363cc17e4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.13032ms
Apr 28 13:33:15.122: INFO: Pod "pod-configmaps-5806f65e-18c6-4337-bc89-cb0363cc17e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024588779s
Apr 28 13:33:17.127: INFO: Pod "pod-configmaps-5806f65e-18c6-4337-bc89-cb0363cc17e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028795248s
STEP: Saw pod success
Apr 28 13:33:17.127: INFO: Pod "pod-configmaps-5806f65e-18c6-4337-bc89-cb0363cc17e4" satisfied condition "success or failure"
Apr 28 13:33:17.130: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5806f65e-18c6-4337-bc89-cb0363cc17e4 container configmap-volume-test:
STEP: delete the pod
Apr 28 13:33:17.187: INFO: Waiting for pod pod-configmaps-5806f65e-18c6-4337-bc89-cb0363cc17e4 to disappear
Apr 28 13:33:17.200: INFO: Pod pod-configmaps-5806f65e-18c6-4337-bc89-cb0363cc17e4 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:33:17.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1091" for this suite.
Apr 28 13:33:23.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:33:23.310: INFO: namespace configmap-1091 deletion completed in 6.105354876s
• [SLOW TEST:10.308 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:33:23.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 28 13:33:23.412: INFO: Waiting up to 5m0s for pod "pod-bf2265e0-d150-41fb-8a7f-09753e6d6462" in namespace "emptydir-3515" to be "success or failure"
Apr 28 13:33:23.416: INFO: Pod "pod-bf2265e0-d150-41fb-8a7f-09753e6d6462": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173357ms
Apr 28 13:33:25.421: INFO: Pod "pod-bf2265e0-d150-41fb-8a7f-09753e6d6462": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008664022s
Apr 28 13:33:27.424: INFO: Pod "pod-bf2265e0-d150-41fb-8a7f-09753e6d6462": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012308252s
STEP: Saw pod success
Apr 28 13:33:27.425: INFO: Pod "pod-bf2265e0-d150-41fb-8a7f-09753e6d6462" satisfied condition "success or failure"
Apr 28 13:33:27.427: INFO: Trying to get logs from node iruya-worker pod pod-bf2265e0-d150-41fb-8a7f-09753e6d6462 container test-container:
STEP: delete the pod
Apr 28 13:33:27.448: INFO: Waiting for pod pod-bf2265e0-d150-41fb-8a7f-09753e6d6462 to disappear
Apr 28 13:33:27.452: INFO: Pod pod-bf2265e0-d150-41fb-8a7f-09753e6d6462 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:33:27.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3515" for this suite.
Apr 28 13:33:33.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:33:33.564: INFO: namespace emptydir-3515 deletion completed in 6.109273887s
• [SLOW TEST:10.254 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:33:33.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 28 13:33:33.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9723'
Apr 28 13:33:36.283: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 28 13:33:36.284: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Apr 28 13:33:36.297: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-rltc8]
Apr 28 13:33:36.297: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-rltc8" in namespace "kubectl-9723" to be "running and ready"
Apr 28 13:33:36.302: INFO: Pod "e2e-test-nginx-rc-rltc8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.553629ms
Apr 28 13:33:38.309: INFO: Pod "e2e-test-nginx-rc-rltc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01171394s
Apr 28 13:33:40.313: INFO: Pod "e2e-test-nginx-rc-rltc8": Phase="Running", Reason="", readiness=true. Elapsed: 4.016184442s
Apr 28 13:33:40.313: INFO: Pod "e2e-test-nginx-rc-rltc8" satisfied condition "running and ready"
Apr 28 13:33:40.313: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-rltc8]
Apr 28 13:33:40.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9723'
Apr 28 13:33:40.432: INFO: stderr: ""
Apr 28 13:33:40.432: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Apr 28 13:33:40.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9723'
Apr 28 13:33:40.550: INFO: stderr: ""
Apr 28 13:33:40.550: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:33:40.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9723" for this suite.
Apr 28 13:34:02.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:34:02.641: INFO: namespace kubectl-9723 deletion completed in 22.087694331s
• [SLOW TEST:29.076 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:34:02.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:34:02.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3440" for this suite.
Apr 28 13:34:08.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:34:09.055: INFO: namespace kubelet-test-3440 deletion completed in 6.160401391s
• [SLOW TEST:6.413 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:34:09.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 28 13:34:14.181: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:34:15.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3037" for this suite.
Apr 28 13:34:37.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:34:37.333: INFO: namespace replicaset-3037 deletion completed in 22.110872825s
• [SLOW TEST:28.278 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:34:37.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-603e45fc-a06a-4441-98b5-16ef64460a09 in namespace container-probe-3979
Apr 28 13:34:41.533: INFO: Started pod busybox-603e45fc-a06a-4441-98b5-16ef64460a09 in namespace container-probe-3979
STEP: checking the pod's current state and verifying that restartCount is present
Apr 28 13:34:41.536: INFO: Initial restart count of pod busybox-603e45fc-a06a-4441-98b5-16ef64460a09 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:38:42.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3979" for this suite.
Apr 28 13:38:48.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:38:48.505: INFO: namespace container-probe-3979 deletion completed in 6.169055385s
• [SLOW TEST:251.171 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:38:48.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ddc862fc-6b3b-4846-8a3a-14567c83724c
STEP: Creating a pod to test consume secrets
Apr 28 13:38:48.609: INFO: Waiting up to 5m0s for pod "pod-secrets-93159d7c-acd0-4b37-be3b-9fa25a696b36" in namespace "secrets-1299" to be "success or failure"
Apr 28 13:38:48.622: INFO: Pod "pod-secrets-93159d7c-acd0-4b37-be3b-9fa25a696b36": Phase="Pending", Reason="", readiness=false. Elapsed: 13.428968ms
Apr 28 13:38:50.626: INFO: Pod "pod-secrets-93159d7c-acd0-4b37-be3b-9fa25a696b36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017399167s
Apr 28 13:38:52.630: INFO: Pod "pod-secrets-93159d7c-acd0-4b37-be3b-9fa25a696b36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021133821s
STEP: Saw pod success
Apr 28 13:38:52.630: INFO: Pod "pod-secrets-93159d7c-acd0-4b37-be3b-9fa25a696b36" satisfied condition "success or failure"
Apr 28 13:38:52.633: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-93159d7c-acd0-4b37-be3b-9fa25a696b36 container secret-env-test:
STEP: delete the pod
Apr 28 13:38:52.654: INFO: Waiting for pod pod-secrets-93159d7c-acd0-4b37-be3b-9fa25a696b36 to disappear
Apr 28 13:38:52.678: INFO: Pod pod-secrets-93159d7c-acd0-4b37-be3b-9fa25a696b36 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:38:52.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1299" for this suite.
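The secret-to-env-var consumption logged above follows the standard `secretKeyRef` pattern. A minimal sketch, with hypothetical names, key, and image (the test generates its own):

```yaml
# Illustrative only: a secret consumed through an environment variable,
# mirroring the steps in the log. Names are not from the log.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==   # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env"]   # dumps the environment, including the injected value
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```

With `restartPolicy: Never` and a command that exits, the pod runs to completion and reaches `Phase="Succeeded"`, which is the "success or failure" condition the framework polls for above; the container's logs then show the secret value in the environment dump.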
Apr 28 13:38:58.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:38:58.831: INFO: namespace secrets-1299 deletion completed in 6.149570852s • [SLOW TEST:10.325 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:38:58.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Apr 28 13:38:58.903: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5095" to be "success or failure" Apr 28 13:38:58.945: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 41.897428ms Apr 28 13:39:00.949: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.04561349s Apr 28 13:39:02.953: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04967084s Apr 28 13:39:04.957: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053572461s STEP: Saw pod success Apr 28 13:39:04.957: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 28 13:39:04.959: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 28 13:39:05.016: INFO: Waiting for pod pod-host-path-test to disappear Apr 28 13:39:05.028: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:39:05.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5095" for this suite. Apr 28 13:39:11.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:39:11.120: INFO: namespace hostpath-5095 deletion completed in 6.089828911s • [SLOW TEST:12.289 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:39:11.122: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-c2a5631f-f84e-4895-adfc-1bf42546414a STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c2a5631f-f84e-4895-adfc-1bf42546414a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:39:19.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7414" for this suite. Apr 28 13:39:41.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:39:41.393: INFO: namespace configmap-7414 deletion completed in 22.103970723s • [SLOW TEST:30.271 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:39:41.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should 
proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-vqddk in namespace proxy-1670 I0428 13:39:41.503463 6 runners.go:180] Created replication controller with name: proxy-service-vqddk, namespace: proxy-1670, replica count: 1 I0428 13:39:42.553952 6 runners.go:180] proxy-service-vqddk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 13:39:43.554202 6 runners.go:180] proxy-service-vqddk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 13:39:44.554483 6 runners.go:180] proxy-service-vqddk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 13:39:45.554721 6 runners.go:180] proxy-service-vqddk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0428 13:39:46.554918 6 runners.go:180] proxy-service-vqddk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 13:39:46.558: INFO: setup took 5.125120573s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 28 13:39:46.565: INFO: (0) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 6.874003ms) Apr 28 13:39:46.565: INFO: (0) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 6.974603ms) Apr 28 13:39:46.565: INFO: (0) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 6.812554ms) Apr 28 13:39:46.566: INFO: (0) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... 
(200; 7.280947ms) Apr 28 13:39:46.566: INFO: (0) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... (200; 7.20095ms) Apr 28 13:39:46.566: INFO: (0) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 7.218892ms) Apr 28 13:39:46.570: INFO: (0) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 12.004244ms) Apr 28 13:39:46.571: INFO: (0) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 12.211946ms) Apr 28 13:39:46.571: INFO: (0) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 12.227525ms) Apr 28 13:39:46.573: INFO: (0) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 14.875251ms) Apr 28 13:39:46.573: INFO: (0) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 14.972839ms) Apr 28 13:39:46.576: INFO: (0) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 18.102635ms) Apr 28 13:39:46.577: INFO: (0) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 18.413357ms) Apr 28 13:39:46.577: INFO: (0) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 18.275429ms) Apr 28 13:39:46.578: INFO: (0) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test<... (200; 20.687783ms) Apr 28 13:39:46.600: INFO: (1) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 21.371945ms) Apr 28 13:39:46.600: INFO: (1) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 21.434859ms) Apr 28 13:39:46.600: INFO: (1) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: ... 
(200; 23.742071ms) Apr 28 13:39:46.602: INFO: (1) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 23.724669ms) Apr 28 13:39:46.602: INFO: (1) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 23.853234ms) Apr 28 13:39:46.602: INFO: (1) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 23.724591ms) Apr 28 13:39:46.603: INFO: (1) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 24.087384ms) Apr 28 13:39:46.606: INFO: (2) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test<... (200; 5.130573ms) Apr 28 13:39:46.609: INFO: (2) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 6.114962ms) Apr 28 13:39:46.609: INFO: (2) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 6.05371ms) Apr 28 13:39:46.609: INFO: (2) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 6.098669ms) Apr 28 13:39:46.609: INFO: (2) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 6.052953ms) Apr 28 13:39:46.609: INFO: (2) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 6.619908ms) Apr 28 13:39:46.609: INFO: (2) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 6.509977ms) Apr 28 13:39:46.609: INFO: (2) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... 
(200; 6.661153ms) Apr 28 13:39:46.609: INFO: (2) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 6.772493ms) Apr 28 13:39:46.610: INFO: (2) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 6.726361ms) Apr 28 13:39:46.610: INFO: (2) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 7.092898ms) Apr 28 13:39:46.610: INFO: (2) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 6.962511ms) Apr 28 13:39:46.610: INFO: (2) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 7.01074ms) Apr 28 13:39:46.610: INFO: (2) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 7.090419ms) Apr 28 13:39:46.610: INFO: (2) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 7.38013ms) Apr 28 13:39:46.613: INFO: (3) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 2.258769ms) Apr 28 13:39:46.613: INFO: (3) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 2.746074ms) Apr 28 13:39:46.615: INFO: (3) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... 
(200; 4.095217ms) Apr 28 13:39:46.615: INFO: (3) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test (200; 4.513845ms) Apr 28 13:39:46.615: INFO: (3) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 4.514742ms) Apr 28 13:39:46.615: INFO: (3) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 5.091669ms) Apr 28 13:39:46.616: INFO: (3) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 5.373038ms) Apr 28 13:39:46.616: INFO: (3) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 5.373584ms) Apr 28 13:39:46.616: INFO: (3) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 5.309577ms) Apr 28 13:39:46.616: INFO: (3) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 5.33065ms) Apr 28 13:39:46.616: INFO: (3) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... (200; 5.395058ms) Apr 28 13:39:46.616: INFO: (3) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 5.343159ms) Apr 28 13:39:46.616: INFO: (3) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 5.515969ms) Apr 28 13:39:46.616: INFO: (3) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 5.438773ms) Apr 28 13:39:46.620: INFO: (4) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... (200; 3.859102ms) Apr 28 13:39:46.620: INFO: (4) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... 
(200; 3.878907ms) Apr 28 13:39:46.620: INFO: (4) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 3.948639ms) Apr 28 13:39:46.620: INFO: (4) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 3.890697ms) Apr 28 13:39:46.620: INFO: (4) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 3.960514ms) Apr 28 13:39:46.620: INFO: (4) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 3.995533ms) Apr 28 13:39:46.620: INFO: (4) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 4.024503ms) Apr 28 13:39:46.620: INFO: (4) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 4.162064ms) Apr 28 13:39:46.621: INFO: (4) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 4.674823ms) Apr 28 13:39:46.621: INFO: (4) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 4.734976ms) Apr 28 13:39:46.621: INFO: (4) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 5.154927ms) Apr 28 13:39:46.621: INFO: (4) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test (200; 4.335263ms) Apr 28 13:39:46.630: INFO: (5) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 7.789027ms) Apr 28 13:39:46.640: INFO: (5) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 18.004605ms) Apr 28 13:39:46.640: INFO: (5) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 18.227833ms) Apr 28 13:39:46.640: INFO: (5) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 18.294774ms) Apr 28 13:39:46.640: INFO: (5) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... 
(200; 18.292185ms) Apr 28 13:39:46.641: INFO: (5) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 18.81053ms) Apr 28 13:39:46.641: INFO: (5) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... (200; 18.902234ms) Apr 28 13:39:46.642: INFO: (5) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 19.733815ms) Apr 28 13:39:46.642: INFO: (5) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 19.753936ms) Apr 28 13:39:46.642: INFO: (5) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 19.742542ms) Apr 28 13:39:46.643: INFO: (5) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 20.460682ms) Apr 28 13:39:46.643: INFO: (5) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 20.50763ms) Apr 28 13:39:46.643: INFO: (5) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 20.469025ms) Apr 28 13:39:46.643: INFO: (5) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 20.514543ms) Apr 28 13:39:46.643: INFO: (5) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: ... (200; 2.707349ms) Apr 28 13:39:46.646: INFO: (6) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 2.870786ms) Apr 28 13:39:46.648: INFO: (6) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... 
(200; 4.878166ms) Apr 28 13:39:46.648: INFO: (6) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 4.933019ms) Apr 28 13:39:46.648: INFO: (6) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 4.85803ms) Apr 28 13:39:46.648: INFO: (6) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 4.963279ms) Apr 28 13:39:46.648: INFO: (6) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 4.949884ms) Apr 28 13:39:46.649: INFO: (6) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 5.516563ms) Apr 28 13:39:46.649: INFO: (6) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test<... (200; 4.969052ms) Apr 28 13:39:46.655: INFO: (7) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 4.757221ms) Apr 28 13:39:46.655: INFO: (7) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 3.873685ms) Apr 28 13:39:46.656: INFO: (7) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: ... 
(200; 4.559808ms) Apr 28 13:39:46.656: INFO: (7) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 5.682441ms) Apr 28 13:39:46.656: INFO: (7) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 5.828049ms) Apr 28 13:39:46.657: INFO: (7) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 6.263915ms) Apr 28 13:39:46.657: INFO: (7) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 5.711029ms) Apr 28 13:39:46.657: INFO: (7) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 6.080396ms) Apr 28 13:39:46.657: INFO: (7) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 5.815954ms) Apr 28 13:39:46.664: INFO: (8) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 6.370716ms) Apr 28 13:39:46.664: INFO: (8) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 6.616113ms) Apr 28 13:39:46.664: INFO: (8) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... (200; 6.57516ms) Apr 28 13:39:46.664: INFO: (8) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 6.725753ms) Apr 28 13:39:46.664: INFO: (8) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 6.854911ms) Apr 28 13:39:46.664: INFO: (8) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 7.179968ms) Apr 28 13:39:46.665: INFO: (8) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: ... 
(200; 7.223599ms) Apr 28 13:39:46.665: INFO: (8) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 7.457414ms) Apr 28 13:39:46.665: INFO: (8) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 7.369564ms) Apr 28 13:39:46.665: INFO: (8) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 7.608576ms) Apr 28 13:39:46.665: INFO: (8) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 7.431144ms) Apr 28 13:39:46.665: INFO: (8) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 7.603232ms) Apr 28 13:39:46.665: INFO: (8) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 7.672756ms) Apr 28 13:39:46.669: INFO: (9) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 3.858271ms) Apr 28 13:39:46.670: INFO: (9) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 4.667796ms) Apr 28 13:39:46.670: INFO: (9) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 4.775325ms) Apr 28 13:39:46.670: INFO: (9) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 4.815011ms) Apr 28 13:39:46.670: INFO: (9) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... (200; 4.851298ms) Apr 28 13:39:46.670: INFO: (9) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 4.963834ms) Apr 28 13:39:46.670: INFO: (9) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test<... 
(200; 5.231713ms) Apr 28 13:39:46.670: INFO: (9) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 5.251701ms) Apr 28 13:39:46.671: INFO: (9) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 5.478876ms) Apr 28 13:39:46.671: INFO: (9) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 5.486447ms) Apr 28 13:39:46.671: INFO: (9) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 5.576309ms) Apr 28 13:39:46.671: INFO: (9) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 5.579028ms) Apr 28 13:39:46.671: INFO: (9) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 5.651664ms) Apr 28 13:39:46.675: INFO: (10) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... (200; 3.577994ms) Apr 28 13:39:46.675: INFO: (10) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 3.579101ms) Apr 28 13:39:46.675: INFO: (10) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 3.742091ms) Apr 28 13:39:46.675: INFO: (10) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 3.799122ms) Apr 28 13:39:46.675: INFO: (10) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 4.080917ms) Apr 28 13:39:46.676: INFO: (10) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 4.798974ms) Apr 28 13:39:46.676: INFO: (10) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 4.755826ms) Apr 28 13:39:46.676: INFO: (10) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 5.257035ms) Apr 28 13:39:46.676: INFO: (10) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... 
(200; 5.2564ms) Apr 28 13:39:46.676: INFO: (10) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 5.542565ms) Apr 28 13:39:46.677: INFO: (10) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 5.992415ms) Apr 28 13:39:46.677: INFO: (10) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 5.968795ms) Apr 28 13:39:46.677: INFO: (10) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 6.208058ms) Apr 28 13:39:46.677: INFO: (10) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 6.101436ms) Apr 28 13:39:46.677: INFO: (10) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test<... (200; 1.704319ms) Apr 28 13:39:46.681: INFO: (11) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 3.582506ms) Apr 28 13:39:46.681: INFO: (11) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 3.61667ms) Apr 28 13:39:46.681: INFO: (11) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 3.826876ms) Apr 28 13:39:46.681: INFO: (11) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: ... 
(200; 3.930544ms) Apr 28 13:39:46.682: INFO: (11) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 4.814003ms) Apr 28 13:39:46.682: INFO: (11) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 5.07777ms) Apr 28 13:39:46.682: INFO: (11) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 5.206962ms) Apr 28 13:39:46.682: INFO: (11) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 5.061649ms) Apr 28 13:39:46.682: INFO: (11) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 5.023705ms) Apr 28 13:39:46.683: INFO: (11) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 5.995818ms) Apr 28 13:39:46.686: INFO: (12) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: ... (200; 3.51216ms) Apr 28 13:39:46.687: INFO: (12) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 3.563963ms) Apr 28 13:39:46.687: INFO: (12) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 3.687709ms) Apr 28 13:39:46.687: INFO: (12) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... 
(200; 3.76556ms) Apr 28 13:39:46.687: INFO: (12) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 3.69314ms) Apr 28 13:39:46.687: INFO: (12) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 3.738ms) Apr 28 13:39:46.687: INFO: (12) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 3.682509ms) Apr 28 13:39:46.688: INFO: (12) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 4.496477ms) Apr 28 13:39:46.688: INFO: (12) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 4.803071ms) Apr 28 13:39:46.688: INFO: (12) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 4.88111ms) Apr 28 13:39:46.688: INFO: (12) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 4.875878ms) Apr 28 13:39:46.688: INFO: (12) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 4.897264ms) Apr 28 13:39:46.688: INFO: (12) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 4.956842ms) Apr 28 13:39:46.692: INFO: (13) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 3.515838ms) Apr 28 13:39:46.692: INFO: (13) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 3.709839ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 4.199267ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 4.124025ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 4.637631ms) Apr 28 13:39:46.693: INFO: (13) 
/api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 4.542755ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 4.729087ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 4.840811ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 4.943184ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... (200; 4.863473ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 4.936087ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 4.997111ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... (200; 4.963191ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 5.083833ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 4.98599ms) Apr 28 13:39:46.693: INFO: (13) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test<... (200; 3.898566ms) Apr 28 13:39:46.698: INFO: (14) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 3.934769ms) Apr 28 13:39:46.698: INFO: (14) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... 
(200; 3.94378ms) Apr 28 13:39:46.698: INFO: (14) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 3.945104ms) Apr 28 13:39:46.698: INFO: (14) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 4.04846ms) Apr 28 13:39:46.698: INFO: (14) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 4.04447ms) Apr 28 13:39:46.698: INFO: (14) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 4.528945ms) Apr 28 13:39:46.698: INFO: (14) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 4.668104ms) Apr 28 13:39:46.718: INFO: (14) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 24.648144ms) Apr 28 13:39:46.718: INFO: (14) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 24.868239ms) Apr 28 13:39:46.718: INFO: (14) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 24.7879ms) Apr 28 13:39:46.718: INFO: (14) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 24.786669ms) Apr 28 13:39:46.722: INFO: (15) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 3.322897ms) Apr 28 13:39:46.723: INFO: (15) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... 
(200; 4.478599ms) Apr 28 13:39:46.724: INFO: (15) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 5.641104ms) Apr 28 13:39:46.725: INFO: (15) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 6.254413ms) Apr 28 13:39:46.725: INFO: (15) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 6.710314ms) Apr 28 13:39:46.725: INFO: (15) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 6.818875ms) Apr 28 13:39:46.726: INFO: (15) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 7.600243ms) Apr 28 13:39:46.727: INFO: (15) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: ... (200; 7.894986ms) Apr 28 13:39:46.727: INFO: (15) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 7.872167ms) Apr 28 13:39:46.727: INFO: (15) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 7.954507ms) Apr 28 13:39:46.727: INFO: (15) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 8.157418ms) Apr 28 13:39:46.727: INFO: (15) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 8.228517ms) Apr 28 13:39:46.727: INFO: (15) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 8.139201ms) Apr 28 13:39:46.727: INFO: (15) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 8.237991ms) Apr 28 13:39:46.732: INFO: (16) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 5.267239ms) Apr 28 13:39:46.733: INFO: (16) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 5.831719ms) Apr 28 13:39:46.733: INFO: (16) 
/api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 5.694624ms) Apr 28 13:39:46.733: INFO: (16) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 5.795795ms) Apr 28 13:39:46.733: INFO: (16) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 5.997869ms) Apr 28 13:39:46.733: INFO: (16) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 6.346929ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... (200; 6.207835ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 6.312068ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 6.429121ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 6.787216ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 6.98049ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 6.942199ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 6.911272ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 6.980616ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... (200; 7.201232ms) Apr 28 13:39:46.734: INFO: (16) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test<... 
(200; 5.912971ms) Apr 28 13:39:46.740: INFO: (17) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test (200; 6.579367ms) Apr 28 13:39:46.741: INFO: (17) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... (200; 6.650571ms) Apr 28 13:39:46.741: INFO: (17) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 6.744945ms) Apr 28 13:39:46.741: INFO: (17) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 6.724221ms) Apr 28 13:39:46.742: INFO: (17) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 7.070158ms) Apr 28 13:39:46.745: INFO: (18) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 3.330061ms) Apr 28 13:39:46.749: INFO: (18) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 7.206712ms) Apr 28 13:39:46.749: INFO: (18) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 7.422205ms) Apr 28 13:39:46.749: INFO: (18) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94/proxy/: test (200; 7.431848ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 8.08089ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 8.153788ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname1/proxy/: tls baz (200; 8.039932ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test<... (200; 8.175874ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... 
(200; 8.342087ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 8.329505ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 8.424728ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 8.276786ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 8.368749ms) Apr 28 13:39:46.750: INFO: (18) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 8.579793ms) Apr 28 13:39:46.755: INFO: (19) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:162/proxy/: bar (200; 4.475418ms) Apr 28 13:39:46.755: INFO: (19) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:1080/proxy/: ... (200; 4.405765ms) Apr 28 13:39:46.755: INFO: (19) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:460/proxy/: tls baz (200; 4.431742ms) Apr 28 13:39:46.756: INFO: (19) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:462/proxy/: tls qux (200; 5.569688ms) Apr 28 13:39:46.756: INFO: (19) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:160/proxy/: foo (200; 5.571297ms) Apr 28 13:39:46.756: INFO: (19) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:1080/proxy/: test<... 
(200; 5.817403ms) Apr 28 13:39:46.757: INFO: (19) /api/v1/namespaces/proxy-1670/pods/https:proxy-service-vqddk-29w94:443/proxy/: test (200; 6.173384ms) Apr 28 13:39:46.757: INFO: (19) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname2/proxy/: bar (200; 6.189168ms) Apr 28 13:39:46.757: INFO: (19) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname1/proxy/: foo (200; 6.668346ms) Apr 28 13:39:46.757: INFO: (19) /api/v1/namespaces/proxy-1670/services/proxy-service-vqddk:portname2/proxy/: bar (200; 6.28736ms) Apr 28 13:39:46.757: INFO: (19) /api/v1/namespaces/proxy-1670/pods/http:proxy-service-vqddk-29w94:160/proxy/: foo (200; 6.620341ms) Apr 28 13:39:46.757: INFO: (19) /api/v1/namespaces/proxy-1670/pods/proxy-service-vqddk-29w94:162/proxy/: bar (200; 6.619745ms) Apr 28 13:39:46.757: INFO: (19) /api/v1/namespaces/proxy-1670/services/https:proxy-service-vqddk:tlsportname2/proxy/: tls qux (200; 6.820621ms) Apr 28 13:39:46.757: INFO: (19) /api/v1/namespaces/proxy-1670/services/http:proxy-service-vqddk:portname1/proxy/: foo (200; 6.820355ms) STEP: deleting ReplicationController proxy-service-vqddk in namespace proxy-1670, will wait for the garbage collector to delete the pods Apr 28 13:39:46.817: INFO: Deleting ReplicationController proxy-service-vqddk took: 7.305439ms Apr 28 13:39:47.117: INFO: Terminating ReplicationController proxy-service-vqddk pods took: 300.227151ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:39:51.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1670" for this suite. 
Apr 28 13:39:57.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:39:58.015: INFO: namespace proxy-1670 deletion completed in 6.092953587s • [SLOW TEST:16.621 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:39:58.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 28 13:40:02.708: INFO: Successfully updated pod "labelsupdateb889bad6-3bed-4a36-8ebe-9afed8f54885" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:40:04.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-704" for this suite. 
Apr 28 13:40:26.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:40:26.864: INFO: namespace projected-704 deletion completed in 22.108065002s • [SLOW TEST:28.848 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:40:26.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-92a44d58-93b2-4229-986d-c6131c5d0d93 STEP: Creating configMap with name cm-test-opt-upd-6340f288-4ab0-418a-8d30-c702e9ad7f63 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-92a44d58-93b2-4229-986d-c6131c5d0d93 STEP: Updating configmap cm-test-opt-upd-6340f288-4ab0-418a-8d30-c702e9ad7f63 STEP: Creating configMap with name cm-test-opt-create-df331557-f44e-4eed-bf6c-e081884e11d8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:41:51.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5948" for this suite. Apr 28 13:42:13.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:42:13.547: INFO: namespace projected-5948 deletion completed in 22.098459548s • [SLOW TEST:106.683 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:42:13.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-9607 I0428 13:42:13.620004 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9607, replica count: 1 I0428 13:42:14.670506 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 
13:42:15.670744 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0428 13:42:16.671049 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 28 13:42:16.800: INFO: Created: latency-svc-j82fp Apr 28 13:42:16.811: INFO: Got endpoints: latency-svc-j82fp [40.09132ms] Apr 28 13:42:16.830: INFO: Created: latency-svc-cbdpv Apr 28 13:42:16.876: INFO: Got endpoints: latency-svc-cbdpv [64.465058ms] Apr 28 13:42:16.889: INFO: Created: latency-svc-zmm6g Apr 28 13:42:16.908: INFO: Got endpoints: latency-svc-zmm6g [96.34148ms] Apr 28 13:42:16.932: INFO: Created: latency-svc-zxxqf Apr 28 13:42:16.951: INFO: Got endpoints: latency-svc-zxxqf [140.410051ms] Apr 28 13:42:17.026: INFO: Created: latency-svc-6996p Apr 28 13:42:17.030: INFO: Got endpoints: latency-svc-6996p [219.332574ms] Apr 28 13:42:17.106: INFO: Created: latency-svc-767mn Apr 28 13:42:17.118: INFO: Got endpoints: latency-svc-767mn [307.069306ms] Apr 28 13:42:17.176: INFO: Created: latency-svc-k8d49 Apr 28 13:42:17.179: INFO: Got endpoints: latency-svc-k8d49 [367.244608ms] Apr 28 13:42:17.245: INFO: Created: latency-svc-hqnzw Apr 28 13:42:17.257: INFO: Got endpoints: latency-svc-hqnzw [445.267633ms] Apr 28 13:42:17.320: INFO: Created: latency-svc-t2jjv Apr 28 13:42:17.358: INFO: Got endpoints: latency-svc-t2jjv [546.711476ms] Apr 28 13:42:17.388: INFO: Created: latency-svc-z862c Apr 28 13:42:17.487: INFO: Got endpoints: latency-svc-z862c [675.644889ms] Apr 28 13:42:17.508: INFO: Created: latency-svc-wjbsf Apr 28 13:42:17.518: INFO: Got endpoints: latency-svc-wjbsf [706.959738ms] Apr 28 13:42:17.543: INFO: Created: latency-svc-tvm59 Apr 28 13:42:17.669: INFO: Got endpoints: latency-svc-tvm59 [857.844095ms] Apr 28 13:42:17.678: INFO: Created: latency-svc-w94t9 Apr 28 13:42:17.692: INFO: Got endpoints: latency-svc-w94t9 
[880.355258ms] Apr 28 13:42:17.718: INFO: Created: latency-svc-942b2 Apr 28 13:42:17.741: INFO: Got endpoints: latency-svc-942b2 [929.870965ms] Apr 28 13:42:17.823: INFO: Created: latency-svc-b88hf Apr 28 13:42:17.830: INFO: Got endpoints: latency-svc-b88hf [1.018561519s] Apr 28 13:42:17.850: INFO: Created: latency-svc-4kg4k Apr 28 13:42:17.866: INFO: Got endpoints: latency-svc-4kg4k [1.055055359s] Apr 28 13:42:17.894: INFO: Created: latency-svc-dvqgr Apr 28 13:42:17.906: INFO: Got endpoints: latency-svc-dvqgr [1.030536945s] Apr 28 13:42:17.962: INFO: Created: latency-svc-rbrdj Apr 28 13:42:17.977: INFO: Got endpoints: latency-svc-rbrdj [1.069000822s] Apr 28 13:42:18.006: INFO: Created: latency-svc-4c8sk Apr 28 13:42:18.014: INFO: Got endpoints: latency-svc-4c8sk [1.062887507s] Apr 28 13:42:18.042: INFO: Created: latency-svc-hqxzp Apr 28 13:42:18.051: INFO: Got endpoints: latency-svc-hqxzp [1.020768172s] Apr 28 13:42:18.104: INFO: Created: latency-svc-m7cnp Apr 28 13:42:18.111: INFO: Got endpoints: latency-svc-m7cnp [992.94929ms] Apr 28 13:42:18.132: INFO: Created: latency-svc-knk2g Apr 28 13:42:18.148: INFO: Got endpoints: latency-svc-knk2g [969.163157ms] Apr 28 13:42:18.167: INFO: Created: latency-svc-zklq8 Apr 28 13:42:18.184: INFO: Got endpoints: latency-svc-zklq8 [927.221893ms] Apr 28 13:42:18.204: INFO: Created: latency-svc-ts2mw Apr 28 13:42:18.264: INFO: Got endpoints: latency-svc-ts2mw [905.707429ms] Apr 28 13:42:18.264: INFO: Created: latency-svc-dcs7j Apr 28 13:42:18.281: INFO: Got endpoints: latency-svc-dcs7j [794.005976ms] Apr 28 13:42:18.300: INFO: Created: latency-svc-sd89z Apr 28 13:42:18.317: INFO: Got endpoints: latency-svc-sd89z [798.704482ms] Apr 28 13:42:18.336: INFO: Created: latency-svc-9s5q8 Apr 28 13:42:18.397: INFO: Got endpoints: latency-svc-9s5q8 [727.539614ms] Apr 28 13:42:18.400: INFO: Created: latency-svc-qb4bn Apr 28 13:42:18.420: INFO: Got endpoints: latency-svc-qb4bn [728.39834ms] Apr 28 13:42:18.462: INFO: Created: 
latency-svc-s5nms Apr 28 13:42:18.474: INFO: Got endpoints: latency-svc-s5nms [732.124423ms] Apr 28 13:42:18.589: INFO: Created: latency-svc-b28sv Apr 28 13:42:18.593: INFO: Got endpoints: latency-svc-b28sv [763.250918ms] Apr 28 13:42:18.642: INFO: Created: latency-svc-sc422 Apr 28 13:42:18.675: INFO: Got endpoints: latency-svc-sc422 [808.338674ms] Apr 28 13:42:18.715: INFO: Created: latency-svc-8567f Apr 28 13:42:18.717: INFO: Got endpoints: latency-svc-8567f [811.173026ms] Apr 28 13:42:18.744: INFO: Created: latency-svc-xbchd Apr 28 13:42:18.759: INFO: Got endpoints: latency-svc-xbchd [782.145572ms] Apr 28 13:42:18.780: INFO: Created: latency-svc-lxdmz Apr 28 13:42:18.810: INFO: Got endpoints: latency-svc-lxdmz [795.189463ms] Apr 28 13:42:18.864: INFO: Created: latency-svc-c57n5 Apr 28 13:42:18.867: INFO: Got endpoints: latency-svc-c57n5 [815.757948ms] Apr 28 13:42:18.894: INFO: Created: latency-svc-6jl6r Apr 28 13:42:18.903: INFO: Got endpoints: latency-svc-6jl6r [792.152633ms] Apr 28 13:42:18.930: INFO: Created: latency-svc-hbhsm Apr 28 13:42:18.947: INFO: Got endpoints: latency-svc-hbhsm [799.234513ms] Apr 28 13:42:19.016: INFO: Created: latency-svc-rrk6q Apr 28 13:42:19.019: INFO: Got endpoints: latency-svc-rrk6q [834.690745ms] Apr 28 13:42:19.038: INFO: Created: latency-svc-8qhzc Apr 28 13:42:19.055: INFO: Got endpoints: latency-svc-8qhzc [791.350525ms] Apr 28 13:42:19.087: INFO: Created: latency-svc-d49qx Apr 28 13:42:19.103: INFO: Got endpoints: latency-svc-d49qx [821.641915ms] Apr 28 13:42:19.146: INFO: Created: latency-svc-xpzh4 Apr 28 13:42:19.149: INFO: Got endpoints: latency-svc-xpzh4 [832.022031ms] Apr 28 13:42:19.175: INFO: Created: latency-svc-9fcpv Apr 28 13:42:19.188: INFO: Got endpoints: latency-svc-9fcpv [791.007168ms] Apr 28 13:42:19.212: INFO: Created: latency-svc-sxcfz Apr 28 13:42:19.235: INFO: Got endpoints: latency-svc-sxcfz [814.865432ms] Apr 28 13:42:19.295: INFO: Created: latency-svc-mrlmh Apr 28 13:42:19.302: INFO: Got endpoints: 
latency-svc-mrlmh [828.656657ms] Apr 28 13:42:19.326: INFO: Created: latency-svc-nn2bw Apr 28 13:42:19.340: INFO: Got endpoints: latency-svc-nn2bw [746.41723ms] Apr 28 13:42:19.363: INFO: Created: latency-svc-rj4nl Apr 28 13:42:19.385: INFO: Got endpoints: latency-svc-rj4nl [710.022184ms] Apr 28 13:42:19.445: INFO: Created: latency-svc-xwq79 Apr 28 13:42:19.453: INFO: Got endpoints: latency-svc-xwq79 [735.904715ms] Apr 28 13:42:19.476: INFO: Created: latency-svc-qcpcs Apr 28 13:42:19.489: INFO: Got endpoints: latency-svc-qcpcs [730.247347ms] Apr 28 13:42:19.512: INFO: Created: latency-svc-tbtqn Apr 28 13:42:19.526: INFO: Got endpoints: latency-svc-tbtqn [715.829002ms] Apr 28 13:42:19.589: INFO: Created: latency-svc-wkrtt Apr 28 13:42:19.613: INFO: Got endpoints: latency-svc-wkrtt [746.103727ms] Apr 28 13:42:19.614: INFO: Created: latency-svc-9sr59 Apr 28 13:42:19.628: INFO: Got endpoints: latency-svc-9sr59 [724.258962ms] Apr 28 13:42:19.655: INFO: Created: latency-svc-sbmhs Apr 28 13:42:19.726: INFO: Got endpoints: latency-svc-sbmhs [779.040556ms] Apr 28 13:42:19.740: INFO: Created: latency-svc-vzk95 Apr 28 13:42:19.755: INFO: Got endpoints: latency-svc-vzk95 [736.059781ms] Apr 28 13:42:19.776: INFO: Created: latency-svc-j5tmm Apr 28 13:42:19.785: INFO: Got endpoints: latency-svc-j5tmm [729.875735ms] Apr 28 13:42:19.817: INFO: Created: latency-svc-brjz8 Apr 28 13:42:19.859: INFO: Got endpoints: latency-svc-brjz8 [756.195656ms] Apr 28 13:42:19.890: INFO: Created: latency-svc-k2x64 Apr 28 13:42:19.906: INFO: Got endpoints: latency-svc-k2x64 [756.493066ms] Apr 28 13:42:19.926: INFO: Created: latency-svc-tn9wj Apr 28 13:42:19.942: INFO: Got endpoints: latency-svc-tn9wj [753.768016ms] Apr 28 13:42:19.990: INFO: Created: latency-svc-2bkht Apr 28 13:42:19.996: INFO: Got endpoints: latency-svc-2bkht [760.780469ms] Apr 28 13:42:20.022: INFO: Created: latency-svc-fkl6k Apr 28 13:42:20.039: INFO: Got endpoints: latency-svc-fkl6k [736.492032ms] Apr 28 13:42:20.063: INFO: 
Created: latency-svc-n9wqz Apr 28 13:42:20.075: INFO: Got endpoints: latency-svc-n9wqz [734.94925ms] Apr 28 13:42:20.122: INFO: Created: latency-svc-vw7jl Apr 28 13:42:20.125: INFO: Got endpoints: latency-svc-vw7jl [739.507862ms] Apr 28 13:42:20.147: INFO: Created: latency-svc-c8qdc Apr 28 13:42:20.159: INFO: Got endpoints: latency-svc-c8qdc [705.530217ms] Apr 28 13:42:20.184: INFO: Created: latency-svc-76228 Apr 28 13:42:20.195: INFO: Got endpoints: latency-svc-76228 [706.296658ms] Apr 28 13:42:20.220: INFO: Created: latency-svc-dhwgq Apr 28 13:42:20.289: INFO: Got endpoints: latency-svc-dhwgq [763.306746ms] Apr 28 13:42:20.309: INFO: Created: latency-svc-44wrc Apr 28 13:42:20.322: INFO: Got endpoints: latency-svc-44wrc [708.93707ms] Apr 28 13:42:20.340: INFO: Created: latency-svc-j46qf Apr 28 13:42:20.352: INFO: Got endpoints: latency-svc-j46qf [724.375813ms] Apr 28 13:42:20.375: INFO: Created: latency-svc-b6vkb Apr 28 13:42:20.433: INFO: Got endpoints: latency-svc-b6vkb [707.232078ms] Apr 28 13:42:20.453: INFO: Created: latency-svc-z8dvg Apr 28 13:42:20.474: INFO: Got endpoints: latency-svc-z8dvg [718.874526ms] Apr 28 13:42:20.496: INFO: Created: latency-svc-bksv6 Apr 28 13:42:20.510: INFO: Got endpoints: latency-svc-bksv6 [724.362189ms] Apr 28 13:42:20.589: INFO: Created: latency-svc-mfbfw Apr 28 13:42:20.597: INFO: Got endpoints: latency-svc-mfbfw [738.303637ms] Apr 28 13:42:20.633: INFO: Created: latency-svc-rwhzc Apr 28 13:42:20.648: INFO: Got endpoints: latency-svc-rwhzc [742.360472ms] Apr 28 13:42:20.669: INFO: Created: latency-svc-cqgnq Apr 28 13:42:20.684: INFO: Got endpoints: latency-svc-cqgnq [741.977211ms] Apr 28 13:42:20.769: INFO: Created: latency-svc-p7j2l Apr 28 13:42:20.774: INFO: Got endpoints: latency-svc-p7j2l [777.922274ms] Apr 28 13:42:20.808: INFO: Created: latency-svc-s6nnc Apr 28 13:42:20.816: INFO: Got endpoints: latency-svc-s6nnc [777.461894ms] Apr 28 13:42:20.837: INFO: Created: latency-svc-xvbhz Apr 28 13:42:20.853: INFO: Got 
endpoints: latency-svc-xvbhz [778.257666ms] Apr 28 13:42:20.906: INFO: Created: latency-svc-gm2gs Apr 28 13:42:20.910: INFO: Got endpoints: latency-svc-gm2gs [784.993127ms] Apr 28 13:42:20.959: INFO: Created: latency-svc-ppxcc Apr 28 13:42:20.971: INFO: Got endpoints: latency-svc-ppxcc [811.975073ms] Apr 28 13:42:20.989: INFO: Created: latency-svc-csjzx Apr 28 13:42:21.001: INFO: Got endpoints: latency-svc-csjzx [805.602423ms] Apr 28 13:42:21.068: INFO: Created: latency-svc-th4qm Apr 28 13:42:21.071: INFO: Got endpoints: latency-svc-th4qm [781.643617ms] Apr 28 13:42:21.120: INFO: Created: latency-svc-cqpj5 Apr 28 13:42:21.133: INFO: Got endpoints: latency-svc-cqpj5 [811.179131ms] Apr 28 13:42:21.156: INFO: Created: latency-svc-zxb5t Apr 28 13:42:21.211: INFO: Got endpoints: latency-svc-zxb5t [858.6998ms] Apr 28 13:42:21.214: INFO: Created: latency-svc-76dk9 Apr 28 13:42:21.224: INFO: Got endpoints: latency-svc-76dk9 [790.481791ms] Apr 28 13:42:21.246: INFO: Created: latency-svc-sskc4 Apr 28 13:42:21.261: INFO: Got endpoints: latency-svc-sskc4 [786.790498ms] Apr 28 13:42:21.282: INFO: Created: latency-svc-pzgfj Apr 28 13:42:21.292: INFO: Got endpoints: latency-svc-pzgfj [782.014533ms] Apr 28 13:42:21.367: INFO: Created: latency-svc-wv2k9 Apr 28 13:42:21.390: INFO: Got endpoints: latency-svc-wv2k9 [792.114024ms] Apr 28 13:42:21.421: INFO: Created: latency-svc-ht5wz Apr 28 13:42:21.436: INFO: Got endpoints: latency-svc-ht5wz [788.32758ms] Apr 28 13:42:21.456: INFO: Created: latency-svc-fj5s9 Apr 28 13:42:21.510: INFO: Got endpoints: latency-svc-fj5s9 [826.360945ms] Apr 28 13:42:21.512: INFO: Created: latency-svc-p6d4l Apr 28 13:42:21.527: INFO: Got endpoints: latency-svc-p6d4l [752.734881ms] Apr 28 13:42:21.551: INFO: Created: latency-svc-vcfpv Apr 28 13:42:21.574: INFO: Got endpoints: latency-svc-vcfpv [758.062319ms] Apr 28 13:42:21.636: INFO: Created: latency-svc-rj58n Apr 28 13:42:21.652: INFO: Got endpoints: latency-svc-rj58n [799.288148ms] Apr 28 13:42:21.678: 
INFO: Created: latency-svc-jzh4c Apr 28 13:42:21.707: INFO: Got endpoints: latency-svc-jzh4c [797.478219ms] Apr 28 13:42:21.780: INFO: Created: latency-svc-hwfs5 Apr 28 13:42:21.822: INFO: Created: latency-svc-f9jhd Apr 28 13:42:21.822: INFO: Got endpoints: latency-svc-hwfs5 [850.947103ms] Apr 28 13:42:21.834: INFO: Got endpoints: latency-svc-f9jhd [832.29943ms] Apr 28 13:42:21.858: INFO: Created: latency-svc-bslp2 Apr 28 13:42:21.935: INFO: Got endpoints: latency-svc-bslp2 [864.685859ms] Apr 28 13:42:21.939: INFO: Created: latency-svc-kxczg Apr 28 13:42:21.942: INFO: Got endpoints: latency-svc-kxczg [808.074865ms] Apr 28 13:42:21.983: INFO: Created: latency-svc-qpxbc Apr 28 13:42:21.996: INFO: Got endpoints: latency-svc-qpxbc [784.893275ms] Apr 28 13:42:22.020: INFO: Created: latency-svc-nvc2m Apr 28 13:42:22.033: INFO: Got endpoints: latency-svc-nvc2m [809.419213ms] Apr 28 13:42:22.081: INFO: Created: latency-svc-qqbbv Apr 28 13:42:22.083: INFO: Got endpoints: latency-svc-qqbbv [822.258793ms] Apr 28 13:42:22.110: INFO: Created: latency-svc-x8756 Apr 28 13:42:22.123: INFO: Got endpoints: latency-svc-x8756 [831.103818ms] Apr 28 13:42:22.151: INFO: Created: latency-svc-hnmh5 Apr 28 13:42:22.153: INFO: Got endpoints: latency-svc-hnmh5 [763.510125ms] Apr 28 13:42:22.217: INFO: Created: latency-svc-6m2ch Apr 28 13:42:22.221: INFO: Got endpoints: latency-svc-6m2ch [784.371771ms] Apr 28 13:42:22.248: INFO: Created: latency-svc-gv9rh Apr 28 13:42:22.262: INFO: Got endpoints: latency-svc-gv9rh [752.107526ms] Apr 28 13:42:22.286: INFO: Created: latency-svc-zx6k8 Apr 28 13:42:22.314: INFO: Got endpoints: latency-svc-zx6k8 [787.082912ms] Apr 28 13:42:22.373: INFO: Created: latency-svc-4zjms Apr 28 13:42:22.415: INFO: Got endpoints: latency-svc-4zjms [840.574277ms] Apr 28 13:42:22.416: INFO: Created: latency-svc-g9wjg Apr 28 13:42:22.425: INFO: Got endpoints: latency-svc-g9wjg [772.213053ms] Apr 28 13:42:22.446: INFO: Created: latency-svc-n8vs2 Apr 28 13:42:22.522: INFO: Got 
endpoints: latency-svc-n8vs2 [814.981541ms] Apr 28 13:42:22.541: INFO: Created: latency-svc-s6pxp Apr 28 13:42:22.558: INFO: Got endpoints: latency-svc-s6pxp [735.736348ms] Apr 28 13:42:22.589: INFO: Created: latency-svc-4b7t4 Apr 28 13:42:22.600: INFO: Got endpoints: latency-svc-4b7t4 [765.996795ms] Apr 28 13:42:22.619: INFO: Created: latency-svc-wnc5w Apr 28 13:42:22.690: INFO: Got endpoints: latency-svc-wnc5w [754.909466ms] Apr 28 13:42:22.721: INFO: Created: latency-svc-xc6gs Apr 28 13:42:22.739: INFO: Got endpoints: latency-svc-xc6gs [797.07433ms] Apr 28 13:42:22.763: INFO: Created: latency-svc-m2785 Apr 28 13:42:22.774: INFO: Got endpoints: latency-svc-m2785 [778.483247ms] Apr 28 13:42:22.840: INFO: Created: latency-svc-pzm6s Apr 28 13:42:22.866: INFO: Got endpoints: latency-svc-pzm6s [832.192228ms] Apr 28 13:42:22.896: INFO: Created: latency-svc-mlcnt Apr 28 13:42:22.913: INFO: Got endpoints: latency-svc-mlcnt [830.067168ms] Apr 28 13:42:22.937: INFO: Created: latency-svc-jgjq5 Apr 28 13:42:22.965: INFO: Got endpoints: latency-svc-jgjq5 [842.567149ms] Apr 28 13:42:22.992: INFO: Created: latency-svc-5mnk4 Apr 28 13:42:23.027: INFO: Got endpoints: latency-svc-5mnk4 [873.968177ms] Apr 28 13:42:23.063: INFO: Created: latency-svc-kdzn8 Apr 28 13:42:23.121: INFO: Got endpoints: latency-svc-kdzn8 [900.218787ms] Apr 28 13:42:23.124: INFO: Created: latency-svc-9f9bz Apr 28 13:42:23.136: INFO: Got endpoints: latency-svc-9f9bz [873.245836ms] Apr 28 13:42:23.160: INFO: Created: latency-svc-7cjz4 Apr 28 13:42:23.172: INFO: Got endpoints: latency-svc-7cjz4 [858.559605ms] Apr 28 13:42:23.195: INFO: Created: latency-svc-nl2qc Apr 28 13:42:23.209: INFO: Got endpoints: latency-svc-nl2qc [793.425622ms] Apr 28 13:42:23.278: INFO: Created: latency-svc-8t6x5 Apr 28 13:42:23.303: INFO: Created: latency-svc-tdb9z Apr 28 13:42:23.304: INFO: Got endpoints: latency-svc-8t6x5 [878.988264ms] Apr 28 13:42:23.328: INFO: Got endpoints: latency-svc-tdb9z [805.929535ms] Apr 28 13:42:23.357: 
INFO: Created: latency-svc-bsjpx Apr 28 13:42:23.365: INFO: Got endpoints: latency-svc-bsjpx [807.700277ms] Apr 28 13:42:23.403: INFO: Created: latency-svc-qwxnw Apr 28 13:42:23.408: INFO: Got endpoints: latency-svc-qwxnw [807.923825ms] Apr 28 13:42:23.429: INFO: Created: latency-svc-p2j5s Apr 28 13:42:23.446: INFO: Got endpoints: latency-svc-p2j5s [755.441143ms] Apr 28 13:42:23.466: INFO: Created: latency-svc-dv6zd Apr 28 13:42:23.476: INFO: Got endpoints: latency-svc-dv6zd [737.178064ms] Apr 28 13:42:23.502: INFO: Created: latency-svc-dm9rb Apr 28 13:42:23.564: INFO: Got endpoints: latency-svc-dm9rb [789.821007ms] Apr 28 13:42:23.566: INFO: Created: latency-svc-thzh5 Apr 28 13:42:23.572: INFO: Got endpoints: latency-svc-thzh5 [706.417454ms] Apr 28 13:42:23.604: INFO: Created: latency-svc-7xjfd Apr 28 13:42:23.621: INFO: Got endpoints: latency-svc-7xjfd [707.890855ms] Apr 28 13:42:23.639: INFO: Created: latency-svc-6tc6t Apr 28 13:42:23.651: INFO: Got endpoints: latency-svc-6tc6t [685.441542ms] Apr 28 13:42:23.697: INFO: Created: latency-svc-gl9x4 Apr 28 13:42:23.699: INFO: Got endpoints: latency-svc-gl9x4 [672.222883ms] Apr 28 13:42:23.724: INFO: Created: latency-svc-rdt8x Apr 28 13:42:23.736: INFO: Got endpoints: latency-svc-rdt8x [614.672106ms] Apr 28 13:42:23.760: INFO: Created: latency-svc-7dnrp Apr 28 13:42:23.783: INFO: Got endpoints: latency-svc-7dnrp [647.482142ms] Apr 28 13:42:23.835: INFO: Created: latency-svc-8rcpx Apr 28 13:42:23.838: INFO: Got endpoints: latency-svc-8rcpx [665.904105ms] Apr 28 13:42:23.879: INFO: Created: latency-svc-xxqc7 Apr 28 13:42:23.893: INFO: Got endpoints: latency-svc-xxqc7 [684.501257ms] Apr 28 13:42:23.910: INFO: Created: latency-svc-c2wwt Apr 28 13:42:23.934: INFO: Got endpoints: latency-svc-c2wwt [629.837748ms] Apr 28 13:42:23.984: INFO: Created: latency-svc-vsv6z Apr 28 13:42:23.989: INFO: Got endpoints: latency-svc-vsv6z [661.01792ms] Apr 28 13:42:24.018: INFO: Created: latency-svc-glcwp Apr 28 13:42:24.031: INFO: Got 
endpoints: latency-svc-glcwp [665.899752ms] Apr 28 13:42:24.053: INFO: Created: latency-svc-vcjqn Apr 28 13:42:24.068: INFO: Got endpoints: latency-svc-vcjqn [660.083066ms] Apr 28 13:42:24.122: INFO: Created: latency-svc-hfdrs Apr 28 13:42:24.125: INFO: Got endpoints: latency-svc-hfdrs [678.98326ms] Apr 28 13:42:24.149: INFO: Created: latency-svc-hn9gv Apr 28 13:42:24.158: INFO: Got endpoints: latency-svc-hn9gv [681.937157ms] Apr 28 13:42:24.179: INFO: Created: latency-svc-scfnh Apr 28 13:42:24.195: INFO: Got endpoints: latency-svc-scfnh [630.196948ms] Apr 28 13:42:24.215: INFO: Created: latency-svc-djfr8 Apr 28 13:42:24.253: INFO: Got endpoints: latency-svc-djfr8 [680.700329ms] Apr 28 13:42:24.270: INFO: Created: latency-svc-zrv62 Apr 28 13:42:24.286: INFO: Got endpoints: latency-svc-zrv62 [665.145298ms] Apr 28 13:42:24.312: INFO: Created: latency-svc-ptkrm Apr 28 13:42:24.322: INFO: Got endpoints: latency-svc-ptkrm [670.969348ms] Apr 28 13:42:24.391: INFO: Created: latency-svc-798j4 Apr 28 13:42:24.393: INFO: Got endpoints: latency-svc-798j4 [693.830617ms] Apr 28 13:42:24.456: INFO: Created: latency-svc-kq2tf Apr 28 13:42:24.472: INFO: Got endpoints: latency-svc-kq2tf [736.000114ms] Apr 28 13:42:24.553: INFO: Created: latency-svc-zb2jw Apr 28 13:42:24.555: INFO: Got endpoints: latency-svc-zb2jw [772.073374ms] Apr 28 13:42:24.594: INFO: Created: latency-svc-v5crr Apr 28 13:42:24.611: INFO: Got endpoints: latency-svc-v5crr [772.047394ms] Apr 28 13:42:24.642: INFO: Created: latency-svc-p9g5r Apr 28 13:42:24.696: INFO: Got endpoints: latency-svc-p9g5r [802.923309ms] Apr 28 13:42:24.698: INFO: Created: latency-svc-mjq8w Apr 28 13:42:24.715: INFO: Got endpoints: latency-svc-mjq8w [781.301265ms] Apr 28 13:42:24.755: INFO: Created: latency-svc-7n4d5 Apr 28 13:42:24.767: INFO: Got endpoints: latency-svc-7n4d5 [778.004629ms] Apr 28 13:42:24.835: INFO: Created: latency-svc-5lc7w Apr 28 13:42:24.839: INFO: Got endpoints: latency-svc-5lc7w [807.880591ms] Apr 28 13:42:24.858: 
INFO: Created: latency-svc-gjsxz Apr 28 13:42:24.881: INFO: Got endpoints: latency-svc-gjsxz [813.755031ms] Apr 28 13:42:24.917: INFO: Created: latency-svc-cwbgw Apr 28 13:42:24.990: INFO: Got endpoints: latency-svc-cwbgw [864.848857ms] Apr 28 13:42:24.992: INFO: Created: latency-svc-8npps Apr 28 13:42:25.002: INFO: Got endpoints: latency-svc-8npps [843.926169ms] Apr 28 13:42:25.056: INFO: Created: latency-svc-mp5cj Apr 28 13:42:25.075: INFO: Got endpoints: latency-svc-mp5cj [880.551607ms] Apr 28 13:42:25.128: INFO: Created: latency-svc-c69hw Apr 28 13:42:25.131: INFO: Got endpoints: latency-svc-c69hw [878.252247ms] Apr 28 13:42:25.157: INFO: Created: latency-svc-c8tx5 Apr 28 13:42:25.172: INFO: Got endpoints: latency-svc-c8tx5 [885.467674ms] Apr 28 13:42:25.194: INFO: Created: latency-svc-499xf Apr 28 13:42:25.208: INFO: Got endpoints: latency-svc-499xf [885.668203ms] Apr 28 13:42:25.265: INFO: Created: latency-svc-882p9 Apr 28 13:42:25.268: INFO: Got endpoints: latency-svc-882p9 [874.892012ms] Apr 28 13:42:25.296: INFO: Created: latency-svc-kjmzt Apr 28 13:42:25.304: INFO: Got endpoints: latency-svc-kjmzt [831.979116ms] Apr 28 13:42:25.326: INFO: Created: latency-svc-997sd Apr 28 13:42:25.347: INFO: Got endpoints: latency-svc-997sd [791.811165ms] Apr 28 13:42:25.428: INFO: Created: latency-svc-wzj67 Apr 28 13:42:25.430: INFO: Got endpoints: latency-svc-wzj67 [819.823021ms] Apr 28 13:42:25.458: INFO: Created: latency-svc-665kp Apr 28 13:42:25.479: INFO: Got endpoints: latency-svc-665kp [782.849338ms] Apr 28 13:42:25.506: INFO: Created: latency-svc-ppz9l Apr 28 13:42:25.521: INFO: Got endpoints: latency-svc-ppz9l [806.11339ms] Apr 28 13:42:25.565: INFO: Created: latency-svc-gwjlg Apr 28 13:42:25.568: INFO: Got endpoints: latency-svc-gwjlg [800.298839ms] Apr 28 13:42:25.602: INFO: Created: latency-svc-gcxmd Apr 28 13:42:25.618: INFO: Got endpoints: latency-svc-gcxmd [778.289228ms] Apr 28 13:42:25.650: INFO: Created: latency-svc-bv42w Apr 28 13:42:25.702: INFO: Got 
endpoints: latency-svc-bv42w [820.633872ms] Apr 28 13:42:25.705: INFO: Created: latency-svc-vphjv Apr 28 13:42:25.713: INFO: Got endpoints: latency-svc-vphjv [723.106944ms] Apr 28 13:42:25.740: INFO: Created: latency-svc-nvrmd Apr 28 13:42:25.756: INFO: Got endpoints: latency-svc-nvrmd [753.605658ms] Apr 28 13:42:25.781: INFO: Created: latency-svc-7kjtk Apr 28 13:42:25.791: INFO: Got endpoints: latency-svc-7kjtk [715.817162ms] Apr 28 13:42:25.846: INFO: Created: latency-svc-ztc2w Apr 28 13:42:25.848: INFO: Got endpoints: latency-svc-ztc2w [717.120415ms] Apr 28 13:42:25.890: INFO: Created: latency-svc-pbnwp Apr 28 13:42:25.906: INFO: Got endpoints: latency-svc-pbnwp [734.107065ms] Apr 28 13:42:25.937: INFO: Created: latency-svc-mq8wn Apr 28 13:42:25.990: INFO: Got endpoints: latency-svc-mq8wn [782.108764ms] Apr 28 13:42:26.003: INFO: Created: latency-svc-d8jw4 Apr 28 13:42:26.014: INFO: Got endpoints: latency-svc-d8jw4 [746.008978ms] Apr 28 13:42:26.034: INFO: Created: latency-svc-2rwd7 Apr 28 13:42:26.045: INFO: Got endpoints: latency-svc-2rwd7 [740.798813ms] Apr 28 13:42:26.064: INFO: Created: latency-svc-nmj6d Apr 28 13:42:26.133: INFO: Got endpoints: latency-svc-nmj6d [786.22529ms] Apr 28 13:42:26.142: INFO: Created: latency-svc-q7p42 Apr 28 13:42:26.153: INFO: Got endpoints: latency-svc-q7p42 [722.621777ms] Apr 28 13:42:26.178: INFO: Created: latency-svc-p7n4p Apr 28 13:42:26.189: INFO: Got endpoints: latency-svc-p7n4p [710.34151ms] Apr 28 13:42:26.226: INFO: Created: latency-svc-87n9q Apr 28 13:42:26.283: INFO: Got endpoints: latency-svc-87n9q [761.700223ms] Apr 28 13:42:26.285: INFO: Created: latency-svc-4mkcn Apr 28 13:42:26.292: INFO: Got endpoints: latency-svc-4mkcn [724.084186ms] Apr 28 13:42:26.322: INFO: Created: latency-svc-bkfbh Apr 28 13:42:26.334: INFO: Got endpoints: latency-svc-bkfbh [716.47895ms] Apr 28 13:42:26.352: INFO: Created: latency-svc-9x76b Apr 28 13:42:26.364: INFO: Got endpoints: latency-svc-9x76b [662.262854ms] Apr 28 13:42:26.409: 
INFO: Created: latency-svc-pln28 Apr 28 13:42:26.419: INFO: Got endpoints: latency-svc-pln28 [705.987824ms] Apr 28 13:42:26.454: INFO: Created: latency-svc-rxtk5 Apr 28 13:42:26.490: INFO: Created: latency-svc-bmr5r Apr 28 13:42:26.490: INFO: Got endpoints: latency-svc-rxtk5 [734.76792ms] Apr 28 13:42:26.540: INFO: Got endpoints: latency-svc-bmr5r [749.141131ms] Apr 28 13:42:26.561: INFO: Created: latency-svc-gllqc Apr 28 13:42:26.582: INFO: Got endpoints: latency-svc-gllqc [733.683294ms] Apr 28 13:42:26.615: INFO: Created: latency-svc-4b76h Apr 28 13:42:26.630: INFO: Got endpoints: latency-svc-4b76h [724.253503ms] Apr 28 13:42:26.666: INFO: Created: latency-svc-68sxj Apr 28 13:42:26.693: INFO: Got endpoints: latency-svc-68sxj [703.624129ms] Apr 28 13:42:26.730: INFO: Created: latency-svc-5fm7x Apr 28 13:42:26.748: INFO: Got endpoints: latency-svc-5fm7x [733.212867ms] Apr 28 13:42:26.798: INFO: Created: latency-svc-fb6qf Apr 28 13:42:26.831: INFO: Got endpoints: latency-svc-fb6qf [786.497714ms] Apr 28 13:42:26.832: INFO: Created: latency-svc-q9c6f Apr 28 13:42:26.847: INFO: Got endpoints: latency-svc-q9c6f [713.872532ms] Apr 28 13:42:26.880: INFO: Created: latency-svc-vft5c Apr 28 13:42:26.895: INFO: Got endpoints: latency-svc-vft5c [742.37548ms] Apr 28 13:42:26.942: INFO: Created: latency-svc-mcfvl Apr 28 13:42:26.944: INFO: Got endpoints: latency-svc-mcfvl [754.89668ms] Apr 28 13:42:26.970: INFO: Created: latency-svc-lb54j Apr 28 13:42:26.986: INFO: Got endpoints: latency-svc-lb54j [703.120286ms] Apr 28 13:42:27.012: INFO: Created: latency-svc-z7str Apr 28 13:42:27.034: INFO: Got endpoints: latency-svc-z7str [742.395839ms] Apr 28 13:42:27.090: INFO: Created: latency-svc-4dcwc Apr 28 13:42:27.100: INFO: Got endpoints: latency-svc-4dcwc [766.123242ms] Apr 28 13:42:27.133: INFO: Created: latency-svc-6dcvc Apr 28 13:42:27.149: INFO: Got endpoints: latency-svc-6dcvc [784.371458ms] Apr 28 13:42:27.169: INFO: Created: latency-svc-sjb5g Apr 28 13:42:27.206: INFO: Got 
endpoints: latency-svc-sjb5g [787.18924ms] Apr 28 13:42:27.222: INFO: Created: latency-svc-s4vdv Apr 28 13:42:27.241: INFO: Got endpoints: latency-svc-s4vdv [750.289731ms] Apr 28 13:42:27.264: INFO: Created: latency-svc-w87vz Apr 28 13:42:27.276: INFO: Got endpoints: latency-svc-w87vz [735.478885ms] Apr 28 13:42:27.276: INFO: Latencies: [64.465058ms 96.34148ms 140.410051ms 219.332574ms 307.069306ms 367.244608ms 445.267633ms 546.711476ms 614.672106ms 629.837748ms 630.196948ms 647.482142ms 660.083066ms 661.01792ms 662.262854ms 665.145298ms 665.899752ms 665.904105ms 670.969348ms 672.222883ms 675.644889ms 678.98326ms 680.700329ms 681.937157ms 684.501257ms 685.441542ms 693.830617ms 703.120286ms 703.624129ms 705.530217ms 705.987824ms 706.296658ms 706.417454ms 706.959738ms 707.232078ms 707.890855ms 708.93707ms 710.022184ms 710.34151ms 713.872532ms 715.817162ms 715.829002ms 716.47895ms 717.120415ms 718.874526ms 722.621777ms 723.106944ms 724.084186ms 724.253503ms 724.258962ms 724.362189ms 724.375813ms 727.539614ms 728.39834ms 729.875735ms 730.247347ms 732.124423ms 733.212867ms 733.683294ms 734.107065ms 734.76792ms 734.94925ms 735.478885ms 735.736348ms 735.904715ms 736.000114ms 736.059781ms 736.492032ms 737.178064ms 738.303637ms 739.507862ms 740.798813ms 741.977211ms 742.360472ms 742.37548ms 742.395839ms 746.008978ms 746.103727ms 746.41723ms 749.141131ms 750.289731ms 752.107526ms 752.734881ms 753.605658ms 753.768016ms 754.89668ms 754.909466ms 755.441143ms 756.195656ms 756.493066ms 758.062319ms 760.780469ms 761.700223ms 763.250918ms 763.306746ms 763.510125ms 765.996795ms 766.123242ms 772.047394ms 772.073374ms 772.213053ms 777.461894ms 777.922274ms 778.004629ms 778.257666ms 778.289228ms 778.483247ms 779.040556ms 781.301265ms 781.643617ms 782.014533ms 782.108764ms 782.145572ms 782.849338ms 784.371458ms 784.371771ms 784.893275ms 784.993127ms 786.22529ms 786.497714ms 786.790498ms 787.082912ms 787.18924ms 788.32758ms 789.821007ms 790.481791ms 791.007168ms 791.350525ms 791.811165ms 
792.114024ms 792.152633ms 793.425622ms 794.005976ms 795.189463ms 797.07433ms 797.478219ms 798.704482ms 799.234513ms 799.288148ms 800.298839ms 802.923309ms 805.602423ms 805.929535ms 806.11339ms 807.700277ms 807.880591ms 807.923825ms 808.074865ms 808.338674ms 809.419213ms 811.173026ms 811.179131ms 811.975073ms 813.755031ms 814.865432ms 814.981541ms 815.757948ms 819.823021ms 820.633872ms 821.641915ms 822.258793ms 826.360945ms 828.656657ms 830.067168ms 831.103818ms 831.979116ms 832.022031ms 832.192228ms 832.29943ms 834.690745ms 840.574277ms 842.567149ms 843.926169ms 850.947103ms 857.844095ms 858.559605ms 858.6998ms 864.685859ms 864.848857ms 873.245836ms 873.968177ms 874.892012ms 878.252247ms 878.988264ms 880.355258ms 880.551607ms 885.467674ms 885.668203ms 900.218787ms 905.707429ms 927.221893ms 929.870965ms 969.163157ms 992.94929ms 1.018561519s 1.020768172s 1.030536945s 1.055055359s 1.062887507s 1.069000822s] Apr 28 13:42:27.276: INFO: 50 %ile: 772.213053ms Apr 28 13:42:27.276: INFO: 90 %ile: 873.968177ms Apr 28 13:42:27.276: INFO: 99 %ile: 1.062887507s Apr 28 13:42:27.276: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:42:27.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9607" for this suite. 
Apr 28 13:42:47.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:42:47.410: INFO: namespace svc-latency-9607 deletion completed in 20.116638219s
• [SLOW TEST:33.862 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:42:47.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-d929b9da-2525-49bb-a1c2-bf705326bafb
STEP: Creating a pod to test consume configMaps
Apr 28 13:42:47.485: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4715b708-fe04-406e-942a-e624a7171a80" in namespace "projected-8548" to be "success or failure"
Apr 28 13:42:47.495: INFO: Pod "pod-projected-configmaps-4715b708-fe04-406e-942a-e624a7171a80": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36766ms
Apr 28 13:42:49.513: INFO: Pod "pod-projected-configmaps-4715b708-fe04-406e-942a-e624a7171a80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028666853s
Apr 28 13:42:51.517: INFO: Pod "pod-projected-configmaps-4715b708-fe04-406e-942a-e624a7171a80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032532352s
STEP: Saw pod success
Apr 28 13:42:51.517: INFO: Pod "pod-projected-configmaps-4715b708-fe04-406e-942a-e624a7171a80" satisfied condition "success or failure"
Apr 28 13:42:51.520: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-4715b708-fe04-406e-942a-e624a7171a80 container projected-configmap-volume-test:
STEP: delete the pod
Apr 28 13:42:51.579: INFO: Waiting for pod pod-projected-configmaps-4715b708-fe04-406e-942a-e624a7171a80 to disappear
Apr 28 13:42:51.586: INFO: Pod pod-projected-configmaps-4715b708-fe04-406e-942a-e624a7171a80 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:42:51.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8548" for this suite.
Apr 28 13:42:57.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:42:57.700: INFO: namespace projected-8548 deletion completed in 6.110144513s
• [SLOW TEST:10.289 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:42:57.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 28 13:43:07.813: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 28 13:43:07.813: INFO: >>> kubeConfig: /root/.kube/config
I0428 13:43:07.847217 6 log.go:172] (0xc0017d48f0)
(0xc00166cfa0) Create stream I0428 13:43:07.847253 6 log.go:172] (0xc0017d48f0) (0xc00166cfa0) Stream added, broadcasting: 1 I0428 13:43:07.849059 6 log.go:172] (0xc0017d48f0) Reply frame received for 1 I0428 13:43:07.849087 6 log.go:172] (0xc0017d48f0) (0xc002ea9220) Create stream I0428 13:43:07.849098 6 log.go:172] (0xc0017d48f0) (0xc002ea9220) Stream added, broadcasting: 3 I0428 13:43:07.850405 6 log.go:172] (0xc0017d48f0) Reply frame received for 3 I0428 13:43:07.850453 6 log.go:172] (0xc0017d48f0) (0xc0030e4960) Create stream I0428 13:43:07.850467 6 log.go:172] (0xc0017d48f0) (0xc0030e4960) Stream added, broadcasting: 5 I0428 13:43:07.851192 6 log.go:172] (0xc0017d48f0) Reply frame received for 5 I0428 13:43:07.898926 6 log.go:172] (0xc0017d48f0) Data frame received for 5 I0428 13:43:07.898980 6 log.go:172] (0xc0030e4960) (5) Data frame handling I0428 13:43:07.899022 6 log.go:172] (0xc0017d48f0) Data frame received for 3 I0428 13:43:07.899041 6 log.go:172] (0xc002ea9220) (3) Data frame handling I0428 13:43:07.899071 6 log.go:172] (0xc002ea9220) (3) Data frame sent I0428 13:43:07.899089 6 log.go:172] (0xc0017d48f0) Data frame received for 3 I0428 13:43:07.899105 6 log.go:172] (0xc002ea9220) (3) Data frame handling I0428 13:43:07.901842 6 log.go:172] (0xc0017d48f0) Data frame received for 1 I0428 13:43:07.901883 6 log.go:172] (0xc00166cfa0) (1) Data frame handling I0428 13:43:07.901906 6 log.go:172] (0xc00166cfa0) (1) Data frame sent I0428 13:43:07.901928 6 log.go:172] (0xc0017d48f0) (0xc00166cfa0) Stream removed, broadcasting: 1 I0428 13:43:07.901947 6 log.go:172] (0xc0017d48f0) Go away received I0428 13:43:07.902045 6 log.go:172] (0xc0017d48f0) (0xc00166cfa0) Stream removed, broadcasting: 1 I0428 13:43:07.902062 6 log.go:172] (0xc0017d48f0) (0xc002ea9220) Stream removed, broadcasting: 3 I0428 13:43:07.902069 6 log.go:172] (0xc0017d48f0) (0xc0030e4960) Stream removed, broadcasting: 5 Apr 28 13:43:07.902: INFO: Exec stderr: "" Apr 28 13:43:07.902: INFO: 
ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:43:07.902: INFO: >>> kubeConfig: /root/.kube/config I0428 13:43:07.934172 6 log.go:172] (0xc002734e70) (0xc002ea9540) Create stream I0428 13:43:07.934199 6 log.go:172] (0xc002734e70) (0xc002ea9540) Stream added, broadcasting: 1 I0428 13:43:07.936685 6 log.go:172] (0xc002734e70) Reply frame received for 1 I0428 13:43:07.936719 6 log.go:172] (0xc002734e70) (0xc0022c2000) Create stream I0428 13:43:07.936730 6 log.go:172] (0xc002734e70) (0xc0022c2000) Stream added, broadcasting: 3 I0428 13:43:07.937867 6 log.go:172] (0xc002734e70) Reply frame received for 3 I0428 13:43:07.937906 6 log.go:172] (0xc002734e70) (0xc00166d040) Create stream I0428 13:43:07.937926 6 log.go:172] (0xc002734e70) (0xc00166d040) Stream added, broadcasting: 5 I0428 13:43:07.938911 6 log.go:172] (0xc002734e70) Reply frame received for 5 I0428 13:43:07.991609 6 log.go:172] (0xc002734e70) Data frame received for 5 I0428 13:43:07.991639 6 log.go:172] (0xc00166d040) (5) Data frame handling I0428 13:43:07.991661 6 log.go:172] (0xc002734e70) Data frame received for 3 I0428 13:43:07.991674 6 log.go:172] (0xc0022c2000) (3) Data frame handling I0428 13:43:07.991683 6 log.go:172] (0xc0022c2000) (3) Data frame sent I0428 13:43:07.991692 6 log.go:172] (0xc002734e70) Data frame received for 3 I0428 13:43:07.991698 6 log.go:172] (0xc0022c2000) (3) Data frame handling I0428 13:43:07.993320 6 log.go:172] (0xc002734e70) Data frame received for 1 I0428 13:43:07.993363 6 log.go:172] (0xc002ea9540) (1) Data frame handling I0428 13:43:07.993390 6 log.go:172] (0xc002ea9540) (1) Data frame sent I0428 13:43:07.993421 6 log.go:172] (0xc002734e70) (0xc002ea9540) Stream removed, broadcasting: 1 I0428 13:43:07.993458 6 log.go:172] (0xc002734e70) Go away received I0428 13:43:07.993606 6 log.go:172] (0xc002734e70) 
(0xc002ea9540) Stream removed, broadcasting: 1 I0428 13:43:07.993640 6 log.go:172] (0xc002734e70) (0xc0022c2000) Stream removed, broadcasting: 3 I0428 13:43:07.993708 6 log.go:172] (0xc002734e70) (0xc00166d040) Stream removed, broadcasting: 5 Apr 28 13:43:07.993: INFO: Exec stderr: "" Apr 28 13:43:07.993: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:43:07.993: INFO: >>> kubeConfig: /root/.kube/config I0428 13:43:08.031584 6 log.go:172] (0xc002735760) (0xc002ea9720) Create stream I0428 13:43:08.031611 6 log.go:172] (0xc002735760) (0xc002ea9720) Stream added, broadcasting: 1 I0428 13:43:08.035151 6 log.go:172] (0xc002735760) Reply frame received for 1 I0428 13:43:08.035178 6 log.go:172] (0xc002735760) (0xc002974dc0) Create stream I0428 13:43:08.035186 6 log.go:172] (0xc002735760) (0xc002974dc0) Stream added, broadcasting: 3 I0428 13:43:08.036151 6 log.go:172] (0xc002735760) Reply frame received for 3 I0428 13:43:08.036197 6 log.go:172] (0xc002735760) (0xc002974e60) Create stream I0428 13:43:08.036211 6 log.go:172] (0xc002735760) (0xc002974e60) Stream added, broadcasting: 5 I0428 13:43:08.037240 6 log.go:172] (0xc002735760) Reply frame received for 5 I0428 13:43:08.092536 6 log.go:172] (0xc002735760) Data frame received for 5 I0428 13:43:08.092564 6 log.go:172] (0xc002974e60) (5) Data frame handling I0428 13:43:08.092594 6 log.go:172] (0xc002735760) Data frame received for 3 I0428 13:43:08.092630 6 log.go:172] (0xc002974dc0) (3) Data frame handling I0428 13:43:08.092652 6 log.go:172] (0xc002974dc0) (3) Data frame sent I0428 13:43:08.092663 6 log.go:172] (0xc002735760) Data frame received for 3 I0428 13:43:08.092670 6 log.go:172] (0xc002974dc0) (3) Data frame handling I0428 13:43:08.093809 6 log.go:172] (0xc002735760) Data frame received for 1 I0428 13:43:08.093845 6 log.go:172] (0xc002ea9720) (1) Data frame 
handling I0428 13:43:08.093873 6 log.go:172] (0xc002ea9720) (1) Data frame sent I0428 13:43:08.093905 6 log.go:172] (0xc002735760) (0xc002ea9720) Stream removed, broadcasting: 1 I0428 13:43:08.093930 6 log.go:172] (0xc002735760) Go away received I0428 13:43:08.094000 6 log.go:172] (0xc002735760) (0xc002ea9720) Stream removed, broadcasting: 1 I0428 13:43:08.094023 6 log.go:172] (0xc002735760) (0xc002974dc0) Stream removed, broadcasting: 3 I0428 13:43:08.094032 6 log.go:172] (0xc002735760) (0xc002974e60) Stream removed, broadcasting: 5 Apr 28 13:43:08.094: INFO: Exec stderr: "" Apr 28 13:43:08.094: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:43:08.094: INFO: >>> kubeConfig: /root/.kube/config I0428 13:43:08.140346 6 log.go:172] (0xc0015eefd0) (0xc0030e4fa0) Create stream I0428 13:43:08.140372 6 log.go:172] (0xc0015eefd0) (0xc0030e4fa0) Stream added, broadcasting: 1 I0428 13:43:08.144028 6 log.go:172] (0xc0015eefd0) Reply frame received for 1 I0428 13:43:08.144105 6 log.go:172] (0xc0015eefd0) (0xc0022c20a0) Create stream I0428 13:43:08.144123 6 log.go:172] (0xc0015eefd0) (0xc0022c20a0) Stream added, broadcasting: 3 I0428 13:43:08.145373 6 log.go:172] (0xc0015eefd0) Reply frame received for 3 I0428 13:43:08.145424 6 log.go:172] (0xc0015eefd0) (0xc002974f00) Create stream I0428 13:43:08.145439 6 log.go:172] (0xc0015eefd0) (0xc002974f00) Stream added, broadcasting: 5 I0428 13:43:08.146755 6 log.go:172] (0xc0015eefd0) Reply frame received for 5 I0428 13:43:08.203123 6 log.go:172] (0xc0015eefd0) Data frame received for 5 I0428 13:43:08.203263 6 log.go:172] (0xc002974f00) (5) Data frame handling I0428 13:43:08.203302 6 log.go:172] (0xc0015eefd0) Data frame received for 3 I0428 13:43:08.203348 6 log.go:172] (0xc0022c20a0) (3) Data frame handling I0428 13:43:08.203387 6 log.go:172] (0xc0022c20a0) (3) Data 
frame sent I0428 13:43:08.203417 6 log.go:172] (0xc0015eefd0) Data frame received for 3 I0428 13:43:08.203431 6 log.go:172] (0xc0022c20a0) (3) Data frame handling I0428 13:43:08.205106 6 log.go:172] (0xc0015eefd0) Data frame received for 1 I0428 13:43:08.205303 6 log.go:172] (0xc0030e4fa0) (1) Data frame handling I0428 13:43:08.205331 6 log.go:172] (0xc0030e4fa0) (1) Data frame sent I0428 13:43:08.205355 6 log.go:172] (0xc0015eefd0) (0xc0030e4fa0) Stream removed, broadcasting: 1 I0428 13:43:08.205521 6 log.go:172] (0xc0015eefd0) (0xc0030e4fa0) Stream removed, broadcasting: 1 I0428 13:43:08.205549 6 log.go:172] (0xc0015eefd0) (0xc0022c20a0) Stream removed, broadcasting: 3 I0428 13:43:08.205750 6 log.go:172] (0xc0015eefd0) (0xc002974f00) Stream removed, broadcasting: 5 Apr 28 13:43:08.205: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 28 13:43:08.205: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:43:08.206: INFO: >>> kubeConfig: /root/.kube/config I0428 13:43:08.209374 6 log.go:172] (0xc0015eefd0) Go away received I0428 13:43:08.241931 6 log.go:172] (0xc001774dc0) (0xc002975040) Create stream I0428 13:43:08.241956 6 log.go:172] (0xc001774dc0) (0xc002975040) Stream added, broadcasting: 1 I0428 13:43:08.244812 6 log.go:172] (0xc001774dc0) Reply frame received for 1 I0428 13:43:08.244849 6 log.go:172] (0xc001774dc0) (0xc0022c2500) Create stream I0428 13:43:08.244862 6 log.go:172] (0xc001774dc0) (0xc0022c2500) Stream added, broadcasting: 3 I0428 13:43:08.246154 6 log.go:172] (0xc001774dc0) Reply frame received for 3 I0428 13:43:08.246205 6 log.go:172] (0xc001774dc0) (0xc0030e50e0) Create stream I0428 13:43:08.246220 6 log.go:172] (0xc001774dc0) (0xc0030e50e0) Stream added, broadcasting: 5 I0428 13:43:08.247312 6 log.go:172] 
(0xc001774dc0) Reply frame received for 5 I0428 13:43:08.302080 6 log.go:172] (0xc001774dc0) Data frame received for 5 I0428 13:43:08.302122 6 log.go:172] (0xc0030e50e0) (5) Data frame handling I0428 13:43:08.302152 6 log.go:172] (0xc001774dc0) Data frame received for 3 I0428 13:43:08.302181 6 log.go:172] (0xc0022c2500) (3) Data frame handling I0428 13:43:08.302205 6 log.go:172] (0xc0022c2500) (3) Data frame sent I0428 13:43:08.302220 6 log.go:172] (0xc001774dc0) Data frame received for 3 I0428 13:43:08.302229 6 log.go:172] (0xc0022c2500) (3) Data frame handling I0428 13:43:08.303998 6 log.go:172] (0xc001774dc0) Data frame received for 1 I0428 13:43:08.304128 6 log.go:172] (0xc002975040) (1) Data frame handling I0428 13:43:08.304235 6 log.go:172] (0xc002975040) (1) Data frame sent I0428 13:43:08.304261 6 log.go:172] (0xc001774dc0) (0xc002975040) Stream removed, broadcasting: 1 I0428 13:43:08.304420 6 log.go:172] (0xc001774dc0) Go away received I0428 13:43:08.304501 6 log.go:172] (0xc001774dc0) (0xc002975040) Stream removed, broadcasting: 1 I0428 13:43:08.304578 6 log.go:172] (0xc001774dc0) (0xc0022c2500) Stream removed, broadcasting: 3 I0428 13:43:08.304603 6 log.go:172] (0xc001774dc0) (0xc0030e50e0) Stream removed, broadcasting: 5 Apr 28 13:43:08.304: INFO: Exec stderr: "" Apr 28 13:43:08.304: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:43:08.304: INFO: >>> kubeConfig: /root/.kube/config I0428 13:43:08.334749 6 log.go:172] (0xc0025ea000) (0xc0030e5360) Create stream I0428 13:43:08.334781 6 log.go:172] (0xc0025ea000) (0xc0030e5360) Stream added, broadcasting: 1 I0428 13:43:08.348033 6 log.go:172] (0xc0025ea000) Reply frame received for 1 I0428 13:43:08.348086 6 log.go:172] (0xc0025ea000) (0xc002ea9860) Create stream I0428 13:43:08.348136 6 log.go:172] (0xc0025ea000) (0xc002ea9860) Stream added, 
broadcasting: 3 I0428 13:43:08.349294 6 log.go:172] (0xc0025ea000) Reply frame received for 3 I0428 13:43:08.349331 6 log.go:172] (0xc0025ea000) (0xc0022c2640) Create stream I0428 13:43:08.349347 6 log.go:172] (0xc0025ea000) (0xc0022c2640) Stream added, broadcasting: 5 I0428 13:43:08.350196 6 log.go:172] (0xc0025ea000) Reply frame received for 5 I0428 13:43:08.409619 6 log.go:172] (0xc0025ea000) Data frame received for 3 I0428 13:43:08.409674 6 log.go:172] (0xc002ea9860) (3) Data frame handling I0428 13:43:08.409715 6 log.go:172] (0xc002ea9860) (3) Data frame sent I0428 13:43:08.409745 6 log.go:172] (0xc0025ea000) Data frame received for 3 I0428 13:43:08.409772 6 log.go:172] (0xc002ea9860) (3) Data frame handling I0428 13:43:08.409892 6 log.go:172] (0xc0025ea000) Data frame received for 5 I0428 13:43:08.409973 6 log.go:172] (0xc0022c2640) (5) Data frame handling I0428 13:43:08.411949 6 log.go:172] (0xc0025ea000) Data frame received for 1 I0428 13:43:08.411983 6 log.go:172] (0xc0030e5360) (1) Data frame handling I0428 13:43:08.412116 6 log.go:172] (0xc0030e5360) (1) Data frame sent I0428 13:43:08.412160 6 log.go:172] (0xc0025ea000) (0xc0030e5360) Stream removed, broadcasting: 1 I0428 13:43:08.412256 6 log.go:172] (0xc0025ea000) Go away received I0428 13:43:08.412307 6 log.go:172] (0xc0025ea000) (0xc0030e5360) Stream removed, broadcasting: 1 I0428 13:43:08.412341 6 log.go:172] (0xc0025ea000) (0xc002ea9860) Stream removed, broadcasting: 3 I0428 13:43:08.412366 6 log.go:172] (0xc0025ea000) (0xc0022c2640) Stream removed, broadcasting: 5 Apr 28 13:43:08.412: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 28 13:43:08.412: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:43:08.412: INFO: >>> kubeConfig: /root/.kube/config 
I0428 13:43:08.444775 6 log.go:172] (0xc0030bc580) (0xc002ea9b80) Create stream I0428 13:43:08.444808 6 log.go:172] (0xc0030bc580) (0xc002ea9b80) Stream added, broadcasting: 1 I0428 13:43:08.447429 6 log.go:172] (0xc0030bc580) Reply frame received for 1 I0428 13:43:08.447479 6 log.go:172] (0xc0030bc580) (0xc002975220) Create stream I0428 13:43:08.447494 6 log.go:172] (0xc0030bc580) (0xc002975220) Stream added, broadcasting: 3 I0428 13:43:08.448681 6 log.go:172] (0xc0030bc580) Reply frame received for 3 I0428 13:43:08.448721 6 log.go:172] (0xc0030bc580) (0xc0022c28c0) Create stream I0428 13:43:08.448741 6 log.go:172] (0xc0030bc580) (0xc0022c28c0) Stream added, broadcasting: 5 I0428 13:43:08.449894 6 log.go:172] (0xc0030bc580) Reply frame received for 5 I0428 13:43:08.514718 6 log.go:172] (0xc0030bc580) Data frame received for 5 I0428 13:43:08.514771 6 log.go:172] (0xc0022c28c0) (5) Data frame handling I0428 13:43:08.514804 6 log.go:172] (0xc0030bc580) Data frame received for 3 I0428 13:43:08.514822 6 log.go:172] (0xc002975220) (3) Data frame handling I0428 13:43:08.514833 6 log.go:172] (0xc002975220) (3) Data frame sent I0428 13:43:08.514849 6 log.go:172] (0xc0030bc580) Data frame received for 3 I0428 13:43:08.514874 6 log.go:172] (0xc002975220) (3) Data frame handling I0428 13:43:08.516398 6 log.go:172] (0xc0030bc580) Data frame received for 1 I0428 13:43:08.516434 6 log.go:172] (0xc002ea9b80) (1) Data frame handling I0428 13:43:08.516469 6 log.go:172] (0xc002ea9b80) (1) Data frame sent I0428 13:43:08.516500 6 log.go:172] (0xc0030bc580) (0xc002ea9b80) Stream removed, broadcasting: 1 I0428 13:43:08.516531 6 log.go:172] (0xc0030bc580) Go away received I0428 13:43:08.516664 6 log.go:172] (0xc0030bc580) (0xc002ea9b80) Stream removed, broadcasting: 1 I0428 13:43:08.516697 6 log.go:172] (0xc0030bc580) (0xc002975220) Stream removed, broadcasting: 3 I0428 13:43:08.516839 6 log.go:172] (0xc0030bc580) (0xc0022c28c0) Stream removed, broadcasting: 5 Apr 28 13:43:08.516: INFO: 
Exec stderr: "" Apr 28 13:43:08.516: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:43:08.516: INFO: >>> kubeConfig: /root/.kube/config I0428 13:43:08.544703 6 log.go:172] (0xc002acc0b0) (0xc002975540) Create stream I0428 13:43:08.544731 6 log.go:172] (0xc002acc0b0) (0xc002975540) Stream added, broadcasting: 1 I0428 13:43:08.547257 6 log.go:172] (0xc002acc0b0) Reply frame received for 1 I0428 13:43:08.547314 6 log.go:172] (0xc002acc0b0) (0xc0030e5400) Create stream I0428 13:43:08.547345 6 log.go:172] (0xc002acc0b0) (0xc0030e5400) Stream added, broadcasting: 3 I0428 13:43:08.548383 6 log.go:172] (0xc002acc0b0) Reply frame received for 3 I0428 13:43:08.548419 6 log.go:172] (0xc002acc0b0) (0xc0022c2960) Create stream I0428 13:43:08.548432 6 log.go:172] (0xc002acc0b0) (0xc0022c2960) Stream added, broadcasting: 5 I0428 13:43:08.549371 6 log.go:172] (0xc002acc0b0) Reply frame received for 5 I0428 13:43:08.628579 6 log.go:172] (0xc002acc0b0) Data frame received for 5 I0428 13:43:08.628604 6 log.go:172] (0xc0022c2960) (5) Data frame handling I0428 13:43:08.628664 6 log.go:172] (0xc002acc0b0) Data frame received for 3 I0428 13:43:08.628693 6 log.go:172] (0xc0030e5400) (3) Data frame handling I0428 13:43:08.628708 6 log.go:172] (0xc0030e5400) (3) Data frame sent I0428 13:43:08.628716 6 log.go:172] (0xc002acc0b0) Data frame received for 3 I0428 13:43:08.628723 6 log.go:172] (0xc0030e5400) (3) Data frame handling I0428 13:43:08.629869 6 log.go:172] (0xc002acc0b0) Data frame received for 1 I0428 13:43:08.629896 6 log.go:172] (0xc002975540) (1) Data frame handling I0428 13:43:08.629925 6 log.go:172] (0xc002975540) (1) Data frame sent I0428 13:43:08.629945 6 log.go:172] (0xc002acc0b0) (0xc002975540) Stream removed, broadcasting: 1 I0428 13:43:08.629975 6 log.go:172] (0xc002acc0b0) Go away received 
I0428 13:43:08.630091 6 log.go:172] (0xc002acc0b0) (0xc002975540) Stream removed, broadcasting: 1 I0428 13:43:08.630132 6 log.go:172] (0xc002acc0b0) (0xc0030e5400) Stream removed, broadcasting: 3 I0428 13:43:08.630162 6 log.go:172] (0xc002acc0b0) (0xc0022c2960) Stream removed, broadcasting: 5 Apr 28 13:43:08.630: INFO: Exec stderr: "" Apr 28 13:43:08.630: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:43:08.630: INFO: >>> kubeConfig: /root/.kube/config I0428 13:43:08.656629 6 log.go:172] (0xc0002a56b0) (0xc00155e000) Create stream I0428 13:43:08.656651 6 log.go:172] (0xc0002a56b0) (0xc00155e000) Stream added, broadcasting: 1 I0428 13:43:08.658268 6 log.go:172] (0xc0002a56b0) Reply frame received for 1 I0428 13:43:08.658311 6 log.go:172] (0xc0002a56b0) (0xc00155e0a0) Create stream I0428 13:43:08.658322 6 log.go:172] (0xc0002a56b0) (0xc00155e0a0) Stream added, broadcasting: 3 I0428 13:43:08.659025 6 log.go:172] (0xc0002a56b0) Reply frame received for 3 I0428 13:43:08.659060 6 log.go:172] (0xc0002a56b0) (0xc00155e1e0) Create stream I0428 13:43:08.659072 6 log.go:172] (0xc0002a56b0) (0xc00155e1e0) Stream added, broadcasting: 5 I0428 13:43:08.659804 6 log.go:172] (0xc0002a56b0) Reply frame received for 5 I0428 13:43:08.713592 6 log.go:172] (0xc0002a56b0) Data frame received for 3 I0428 13:43:08.713641 6 log.go:172] (0xc00155e0a0) (3) Data frame handling I0428 13:43:08.713665 6 log.go:172] (0xc00155e0a0) (3) Data frame sent I0428 13:43:08.713734 6 log.go:172] (0xc0002a56b0) Data frame received for 5 I0428 13:43:08.713768 6 log.go:172] (0xc00155e1e0) (5) Data frame handling I0428 13:43:08.713788 6 log.go:172] (0xc0002a56b0) Data frame received for 3 I0428 13:43:08.713794 6 log.go:172] (0xc00155e0a0) (3) Data frame handling I0428 13:43:08.715113 6 log.go:172] (0xc0002a56b0) Data frame received for 1 
I0428 13:43:08.715130 6 log.go:172] (0xc00155e000) (1) Data frame handling I0428 13:43:08.715147 6 log.go:172] (0xc00155e000) (1) Data frame sent I0428 13:43:08.715179 6 log.go:172] (0xc0002a56b0) (0xc00155e000) Stream removed, broadcasting: 1 I0428 13:43:08.715198 6 log.go:172] (0xc0002a56b0) Go away received I0428 13:43:08.715340 6 log.go:172] (0xc0002a56b0) (0xc00155e000) Stream removed, broadcasting: 1 I0428 13:43:08.715362 6 log.go:172] (0xc0002a56b0) (0xc00155e0a0) Stream removed, broadcasting: 3 I0428 13:43:08.715372 6 log.go:172] (0xc0002a56b0) (0xc00155e1e0) Stream removed, broadcasting: 5 Apr 28 13:43:08.715: INFO: Exec stderr: "" Apr 28 13:43:08.715: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9705 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 13:43:08.715: INFO: >>> kubeConfig: /root/.kube/config I0428 13:43:08.746544 6 log.go:172] (0xc000b9f130) (0xc001202d20) Create stream I0428 13:43:08.746601 6 log.go:172] (0xc000b9f130) (0xc001202d20) Stream added, broadcasting: 1 I0428 13:43:08.748472 6 log.go:172] (0xc000b9f130) Reply frame received for 1 I0428 13:43:08.748504 6 log.go:172] (0xc000b9f130) (0xc0019d00a0) Create stream I0428 13:43:08.748515 6 log.go:172] (0xc000b9f130) (0xc0019d00a0) Stream added, broadcasting: 3 I0428 13:43:08.749673 6 log.go:172] (0xc000b9f130) Reply frame received for 3 I0428 13:43:08.749714 6 log.go:172] (0xc000b9f130) (0xc002666000) Create stream I0428 13:43:08.749731 6 log.go:172] (0xc000b9f130) (0xc002666000) Stream added, broadcasting: 5 I0428 13:43:08.750553 6 log.go:172] (0xc000b9f130) Reply frame received for 5 I0428 13:43:08.812869 6 log.go:172] (0xc000b9f130) Data frame received for 3 I0428 13:43:08.812917 6 log.go:172] (0xc0019d00a0) (3) Data frame handling I0428 13:43:08.812944 6 log.go:172] (0xc0019d00a0) (3) Data frame sent I0428 13:43:08.812952 6 log.go:172] (0xc000b9f130) Data frame 
received for 3 I0428 13:43:08.812964 6 log.go:172] (0xc0019d00a0) (3) Data frame handling I0428 13:43:08.812992 6 log.go:172] (0xc000b9f130) Data frame received for 5 I0428 13:43:08.813015 6 log.go:172] (0xc002666000) (5) Data frame handling I0428 13:43:08.814341 6 log.go:172] (0xc000b9f130) Data frame received for 1 I0428 13:43:08.814364 6 log.go:172] (0xc001202d20) (1) Data frame handling I0428 13:43:08.814382 6 log.go:172] (0xc001202d20) (1) Data frame sent I0428 13:43:08.814396 6 log.go:172] (0xc000b9f130) (0xc001202d20) Stream removed, broadcasting: 1 I0428 13:43:08.814416 6 log.go:172] (0xc000b9f130) Go away received I0428 13:43:08.814548 6 log.go:172] (0xc000b9f130) (0xc001202d20) Stream removed, broadcasting: 1 I0428 13:43:08.814569 6 log.go:172] (0xc000b9f130) (0xc0019d00a0) Stream removed, broadcasting: 3 I0428 13:43:08.814582 6 log.go:172] (0xc000b9f130) (0xc002666000) Stream removed, broadcasting: 5 Apr 28 13:43:08.814: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:43:08.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9705" for this suite. 
Apr 28 13:43:54.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:43:54.935: INFO: namespace e2e-kubelet-etc-hosts-9705 deletion completed in 46.116741982s • [SLOW TEST:57.235 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:43:54.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 28 13:43:55.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1343' Apr 28 13:43:58.318: INFO: stderr: "" Apr 28 13:43:58.318: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Apr 28 13:43:59.323: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:43:59.323: INFO: Found 0 / 1 Apr 28 13:44:00.322: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:44:00.322: INFO: Found 0 / 1 Apr 28 13:44:01.323: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:44:01.323: INFO: Found 1 / 1 Apr 28 13:44:01.323: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 28 13:44:01.327: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:44:01.327: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 28 13:44:01.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-z275v --namespace=kubectl-1343 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 28 13:44:01.435: INFO: stderr: "" Apr 28 13:44:01.435: INFO: stdout: "pod/redis-master-z275v patched\n" STEP: checking annotations Apr 28 13:44:01.458: INFO: Selector matched 1 pods for map[app:redis] Apr 28 13:44:01.458: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:44:01.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1343" for this suite. 
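The `kubectl patch` call in the log above sends the body `{"metadata":{"annotations":{"x":"y"}}}`, which under strategic-merge-patch semantics merges the new annotation into the pod's existing annotation map rather than replacing it. A local sketch of that merge behavior (pod data below is illustrative, not from this run):

```python
import json

# The patch body exactly as passed to `kubectl patch` in the log.
patch = {"metadata": {"annotations": {"x": "y"}}}
patch_json = json.dumps(patch)

def merge_annotations(pod: dict, patch: dict) -> dict:
    """Merge patched annotations into a pod object, map-merge style,
    leaving all other fields untouched (a simplified model of what the
    API server does for annotation maps)."""
    merged = json.loads(json.dumps(pod))  # deep copy via JSON round-trip
    new_ann = patch.get("metadata", {}).get("annotations", {})
    merged.setdefault("metadata", {}).setdefault("annotations", {}).update(new_ann)
    return merged

# Hypothetical pre-patch pod with one pre-existing annotation.
pod = {"metadata": {"name": "redis-master-z275v", "annotations": {"a": "b"}}}
patched = merge_annotations(pod, patch)
```

The "checking annotations" step in the log then re-lists the pods and verifies each one carries the `x: y` annotation alongside whatever was already there.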
Apr 28 13:44:23.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:44:23.590: INFO: namespace kubectl-1343 deletion completed in 22.127857997s • [SLOW TEST:28.655 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:44:23.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 28 13:44:28.179: INFO: Successfully updated pod "pod-update-a13e54c3-03ce-49f0-9000-2f5243c01fa1" STEP: verifying the updated pod is in kubernetes Apr 28 13:44:28.194: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:44:28.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "pods-4833" for this suite. Apr 28 13:44:48.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:44:48.301: INFO: namespace pods-4833 deletion completed in 20.104079274s • [SLOW TEST:24.711 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:44:48.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 13:44:48.416: INFO: Create a RollingUpdate DaemonSet Apr 28 13:44:48.420: INFO: Check that daemon pods launch on every node of the cluster Apr 28 13:44:48.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:44:48.448: INFO: Number of nodes with available pods: 0 Apr 28 13:44:48.448: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:44:49.453: 
INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:44:49.456: INFO: Number of nodes with available pods: 0 Apr 28 13:44:49.456: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:44:50.452: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:44:50.455: INFO: Number of nodes with available pods: 0 Apr 28 13:44:50.455: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:44:51.453: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:44:51.457: INFO: Number of nodes with available pods: 0 Apr 28 13:44:51.458: INFO: Node iruya-worker is running more than one daemon pod Apr 28 13:44:52.466: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:44:52.469: INFO: Number of nodes with available pods: 2 Apr 28 13:44:52.469: INFO: Number of running nodes: 2, number of available pods: 2 Apr 28 13:44:52.469: INFO: Update the DaemonSet to trigger a rollout Apr 28 13:44:52.476: INFO: Updating DaemonSet daemon-set Apr 28 13:44:56.503: INFO: Roll back the DaemonSet before rollout is complete Apr 28 13:44:56.509: INFO: Updating DaemonSet daemon-set Apr 28 13:44:56.509: INFO: Make sure DaemonSet rollback is complete Apr 28 13:44:56.516: INFO: Wrong image for pod: daemon-set-9hjvz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Apr 28 13:44:56.516: INFO: Pod daemon-set-9hjvz is not available Apr 28 13:44:56.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:44:57.542: INFO: Wrong image for pod: daemon-set-9hjvz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 28 13:44:57.542: INFO: Pod daemon-set-9hjvz is not available Apr 28 13:44:57.546: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:44:58.543: INFO: Wrong image for pod: daemon-set-9hjvz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 28 13:44:58.543: INFO: Pod daemon-set-9hjvz is not available Apr 28 13:44:58.547: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 28 13:44:59.543: INFO: Pod daemon-set-d6whf is not available Apr 28 13:44:59.548: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1216, will wait for the garbage collector to delete the pods Apr 28 13:44:59.614: INFO: Deleting DaemonSet.extensions daemon-set took: 6.640772ms Apr 28 13:44:59.914: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.234294ms Apr 28 13:45:12.217: INFO: Number of nodes with available pods: 0 Apr 28 13:45:12.217: INFO: Number of running nodes: 0, number of available pods: 0 Apr 28 13:45:12.219: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1216/daemonsets","resourceVersion":"7903810"},"items":null} Apr 28 13:45:12.221: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1216/pods","resourceVersion":"7903810"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:45:12.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1216" for this suite. Apr 28 13:45:18.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:45:18.322: INFO: namespace daemonsets-1216 deletion completed in 6.087609849s • [SLOW TEST:30.021 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:45:18.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium 
Apr 28 13:45:18.381: INFO: Waiting up to 5m0s for pod "pod-b1007b31-6def-433e-a929-a3eb8f6116e5" in namespace "emptydir-565" to be "success or failure" Apr 28 13:45:18.384: INFO: Pod "pod-b1007b31-6def-433e-a929-a3eb8f6116e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.693382ms Apr 28 13:45:20.389: INFO: Pod "pod-b1007b31-6def-433e-a929-a3eb8f6116e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00829383s Apr 28 13:45:22.393: INFO: Pod "pod-b1007b31-6def-433e-a929-a3eb8f6116e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012742948s STEP: Saw pod success Apr 28 13:45:22.394: INFO: Pod "pod-b1007b31-6def-433e-a929-a3eb8f6116e5" satisfied condition "success or failure" Apr 28 13:45:22.397: INFO: Trying to get logs from node iruya-worker pod pod-b1007b31-6def-433e-a929-a3eb8f6116e5 container test-container: STEP: delete the pod Apr 28 13:45:22.416: INFO: Waiting for pod pod-b1007b31-6def-433e-a929-a3eb8f6116e5 to disappear Apr 28 13:45:22.420: INFO: Pod pod-b1007b31-6def-433e-a929-a3eb8f6116e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:45:22.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-565" for this suite. 
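The emptyDir `(root,0666,default)` case above runs a pod whose container writes a file into the volume and then asserts its mode and content. A local sketch of the same check, assuming a mount-tester-style write (paths and content are illustrative):

```python
import os
import stat
import tempfile

# Sketch: local equivalent of the emptyDir (root,0666,default) check.
# The e2e pod's container writes a file into the emptyDir volume with
# mode 0666; the test then asserts both the permission bits and the
# file content. Here we reproduce that against a temp directory.
def write_and_check(dir_path: str) -> str:
    path = os.path.join(dir_path, "test-file")
    with open(path, "w") as f:
        f.write("mount-tester content")
    os.chmod(path, 0o666)  # chmod sets the exact mode; umask does not apply
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return oct(mode)

with tempfile.TemporaryDirectory() as d:
    mode_str = write_and_check(d)
```

In the real test the pod runs to completion ("success or failure") and the mode/content assertions happen against the container's logs, which is why the log shows the pod phase polled from Pending to Succeeded before the pod is deleted.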
Apr 28 13:45:28.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:45:28.524: INFO: namespace emptydir-565 deletion completed in 6.100588643s • [SLOW TEST:10.202 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:45:28.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3892.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3892.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3892.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3892.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 80.145.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.145.80_udp@PTR;check="$$(dig +tcp +noall +answer +search 80.145.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.145.80_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3892.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3892.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3892.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3892.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3892.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3892.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 80.145.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.145.80_udp@PTR;check="$$(dig +tcp +noall +answer +search 80.145.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.145.80_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 28 13:45:34.759: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:34.763: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:34.767: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:34.770: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:34.792: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:34.795: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:34.798: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:34.802: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:34.820: INFO: Lookups using dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local]
Apr 28 13:45:39.825: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:39.829: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:39.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:39.835: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:39.858: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:39.861: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:39.863: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:39.866: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:39.883: INFO: Lookups using dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local]
Apr 28 13:45:44.825: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:44.829: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:44.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:44.835: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:44.856: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:44.859: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:44.862: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:44.865: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:44.882: INFO: Lookups using dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local]
Apr 28 13:45:49.824: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:49.827: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:49.830: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:49.833: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:49.852: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:49.855: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:49.859: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:49.861: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:49.878: INFO: Lookups using dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local]
Apr 28 13:45:54.825: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:54.828: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:54.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:54.835: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:54.855: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:54.857: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:54.860: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:54.862: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:54.880: INFO: Lookups using dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local]
Apr 28 13:45:59.825: INFO: Unable to read wheezy_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:59.828: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:59.831: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:59.834: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:59.857: INFO: Unable to read jessie_udp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:59.860: INFO: Unable to read jessie_tcp@dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:59.862: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:59.865: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local from pod dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8: the server could not find the requested resource (get pods dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8)
Apr 28 13:45:59.883: INFO: Lookups using dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8 failed for: [wheezy_udp@dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@dns-test-service.dns-3892.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_udp@dns-test-service.dns-3892.svc.cluster.local jessie_tcp@dns-test-service.dns-3892.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3892.svc.cluster.local]
Apr 28 13:46:04.875: INFO: DNS probes using dns-3892/dns-test-2d574cb9-2043-4c30-9145-acd51858c2f8 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:46:05.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3892" for this suite.
Apr 28 13:46:11.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:46:11.859: INFO: namespace dns-3892 deletion completed in 6.119123324s
• [SLOW TEST:43.334 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:46:11.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 28 13:46:11.941: INFO: Waiting up to 5m0s for pod "pod-55571ff9-8497-4cb3-8352-f3f8cf11b613" in namespace "emptydir-7991" to be "success or failure"
Apr 28 13:46:11.944: INFO: Pod "pod-55571ff9-8497-4cb3-8352-f3f8cf11b613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148385ms
Apr 28 13:46:13.948: INFO: Pod "pod-55571ff9-8497-4cb3-8352-f3f8cf11b613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006177294s
Apr 28 13:46:15.980: INFO: Pod "pod-55571ff9-8497-4cb3-8352-f3f8cf11b613": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038581735s
Apr 28 13:46:17.984: INFO: Pod "pod-55571ff9-8497-4cb3-8352-f3f8cf11b613": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042737303s
Apr 28 13:46:19.988: INFO: Pod "pod-55571ff9-8497-4cb3-8352-f3f8cf11b613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046646409s
STEP: Saw pod success
Apr 28 13:46:19.988: INFO: Pod "pod-55571ff9-8497-4cb3-8352-f3f8cf11b613" satisfied condition "success or failure"
Apr 28 13:46:19.991: INFO: Trying to get logs from node iruya-worker pod pod-55571ff9-8497-4cb3-8352-f3f8cf11b613 container test-container:
STEP: delete the pod
Apr 28 13:46:20.005: INFO: Waiting for pod pod-55571ff9-8497-4cb3-8352-f3f8cf11b613 to disappear
Apr 28 13:46:20.025: INFO: Pod pod-55571ff9-8497-4cb3-8352-f3f8cf11b613 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:46:20.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7991" for this suite.
Apr 28 13:46:26.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:46:26.142: INFO: namespace emptydir-7991 deletion completed in 6.11206699s
• [SLOW TEST:14.282 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:46:26.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 28 13:46:26.330: INFO: PodSpec: initContainers in spec.initContainers
Apr 28 13:47:20.486: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2bf6ea2e-687e-4bb2-a91c-b2df8444f3f2", GenerateName:"", Namespace:"init-container-8534",
SelfLink:"/api/v1/namespaces/init-container-8534/pods/pod-init-2bf6ea2e-687e-4bb2-a91c-b2df8444f3f2", UID:"abaa0b67-928e-4391-9ddb-88ea89c900d9", ResourceVersion:"7904223", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723678386, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"330566519"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mkrbt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a3c040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mkrbt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mkrbt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mkrbt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0025fc088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002d0c0c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0025fc110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0025fc130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0025fc138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0025fc13c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723678386, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723678386, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723678386, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723678386, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.81", StartTime:(*v1.Time)(0xc002f8a060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002f8a0a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025a2070)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://dca832c5169fef2255efe4f811b4821210f9d31ac6cff768dfc4135a608138aa"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002f8a0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002f8a080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:47:20.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8534" for this suite.
Apr 28 13:47:42.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:47:42.613: INFO: namespace init-container-8534 deletion completed in 22.116066952s
• [SLOW TEST:76.471 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:47:42.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-a14c737a-2ff0-47b3-a425-c8ed12bbc59d
STEP: Creating a pod to test consume secrets
Apr 28 13:47:42.694: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-577e6737-f25e-47b6-b7a6-1eed39c58da4" in namespace "projected-6719" to be "success or failure"
Apr 28 13:47:42.699: INFO: Pod "pod-projected-secrets-577e6737-f25e-47b6-b7a6-1eed39c58da4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417418ms
Apr 28 13:47:44.703: INFO: Pod "pod-projected-secrets-577e6737-f25e-47b6-b7a6-1eed39c58da4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008576617s
Apr 28 13:47:46.708: INFO: Pod "pod-projected-secrets-577e6737-f25e-47b6-b7a6-1eed39c58da4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013122041s
STEP: Saw pod success
Apr 28 13:47:46.708: INFO: Pod "pod-projected-secrets-577e6737-f25e-47b6-b7a6-1eed39c58da4" satisfied condition "success or failure"
Apr 28 13:47:46.711: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-577e6737-f25e-47b6-b7a6-1eed39c58da4 container secret-volume-test:
STEP: delete the pod
Apr 28 13:47:46.743: INFO: Waiting for pod pod-projected-secrets-577e6737-f25e-47b6-b7a6-1eed39c58da4 to disappear
Apr 28 13:47:46.758: INFO: Pod pod-projected-secrets-577e6737-f25e-47b6-b7a6-1eed39c58da4 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:47:46.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6719" for this suite.
Apr 28 13:47:52.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:47:52.874: INFO: namespace projected-6719 deletion completed in 6.112559639s • [SLOW TEST:10.259 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:47:52.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0428 13:48:02.950186 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 28 13:48:02.950: INFO: For apiserver_request_total:
  For apiserver_request_latencies_summary:
  For apiserver_init_events_total:
  For garbage_collector_attempt_to_delete_queue_latency:
  For garbage_collector_attempt_to_delete_work_duration:
  For garbage_collector_attempt_to_orphan_queue_latency:
  For garbage_collector_attempt_to_orphan_work_duration:
  For garbage_collector_dirty_processing_latency_microseconds:
  For garbage_collector_event_processing_latency_microseconds:
  For garbage_collector_graph_changes_queue_latency:
  For garbage_collector_graph_changes_work_duration:
  For garbage_collector_orphan_processing_latency_microseconds:
  For namespace_queue_latency:
  For namespace_queue_latency_sum:
  For namespace_queue_latency_count:
  For namespace_retries:
  For namespace_work_duration:
  For namespace_work_duration_sum:
  For namespace_work_duration_count:
  For function_duration_seconds:
  For errors_total:
  For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:48:02.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1924" for this suite.
Apr 28 13:48:08.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:48:09.045: INFO: namespace gc-1924 deletion completed in 6.09141418s
• [SLOW TEST:16.170 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:48:09.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-rhts
STEP: Creating a pod to test atomic-volume-subpath
Apr 28 13:48:09.145: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rhts" in namespace "subpath-4301" to be "success or failure"
Apr 28 13:48:09.164: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Pending", Reason="", readiness=false. Elapsed: 19.008619ms
Apr 28 13:48:11.169: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023553415s
Apr 28 13:48:13.173: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 4.028065565s
Apr 28 13:48:15.178: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 6.032522964s
Apr 28 13:48:17.182: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 8.036189692s
Apr 28 13:48:19.195: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 10.049720031s
Apr 28 13:48:21.200: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 12.054166408s
Apr 28 13:48:23.203: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 14.057993302s
Apr 28 13:48:25.208: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 16.062678391s
Apr 28 13:48:27.212: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 18.066767128s
Apr 28 13:48:29.216: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 20.070845407s
Apr 28 13:48:31.221: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 22.07552295s
Apr 28 13:48:33.225: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Running", Reason="", readiness=true. Elapsed: 24.079537362s
Apr 28 13:48:35.228: INFO: Pod "pod-subpath-test-downwardapi-rhts": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.08297994s
STEP: Saw pod success
Apr 28 13:48:35.228: INFO: Pod "pod-subpath-test-downwardapi-rhts" satisfied condition "success or failure"
Apr 28 13:48:35.231: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-rhts container test-container-subpath-downwardapi-rhts:
STEP: delete the pod
Apr 28 13:48:35.251: INFO: Waiting for pod pod-subpath-test-downwardapi-rhts to disappear
Apr 28 13:48:35.256: INFO: Pod pod-subpath-test-downwardapi-rhts no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rhts
Apr 28 13:48:35.256: INFO: Deleting pod "pod-subpath-test-downwardapi-rhts" in namespace "subpath-4301"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:48:35.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4301" for this suite.
Apr 28 13:48:41.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:48:41.423: INFO: namespace subpath-4301 deletion completed in 6.140117181s
• [SLOW TEST:32.377 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:48:41.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-9468d422-0dd6-49d5-81b1-0d82470719db
STEP: Creating a pod to test consume configMaps
Apr 28 13:48:41.474: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b52de035-0e61-48ed-a0c6-e329d1a1396c" in namespace "projected-8519" to be "success or failure"
Apr 28 13:48:41.490: INFO: Pod "pod-projected-configmaps-b52de035-0e61-48ed-a0c6-e329d1a1396c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.275795ms
Apr 28 13:48:43.493: INFO: Pod "pod-projected-configmaps-b52de035-0e61-48ed-a0c6-e329d1a1396c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019069172s
Apr 28 13:48:45.498: INFO: Pod "pod-projected-configmaps-b52de035-0e61-48ed-a0c6-e329d1a1396c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023498504s
STEP: Saw pod success
Apr 28 13:48:45.498: INFO: Pod "pod-projected-configmaps-b52de035-0e61-48ed-a0c6-e329d1a1396c" satisfied condition "success or failure"
Apr 28 13:48:45.501: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-b52de035-0e61-48ed-a0c6-e329d1a1396c container projected-configmap-volume-test:
STEP: delete the pod
Apr 28 13:48:45.523: INFO: Waiting for pod pod-projected-configmaps-b52de035-0e61-48ed-a0c6-e329d1a1396c to disappear
Apr 28 13:48:45.558: INFO: Pod pod-projected-configmaps-b52de035-0e61-48ed-a0c6-e329d1a1396c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:48:45.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8519" for this suite.
Apr 28 13:48:51.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:48:51.719: INFO: namespace projected-8519 deletion completed in 6.158077821s
• [SLOW TEST:10.296 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:48:51.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 28 13:48:51.757: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e26e11e9-58dc-45e1-a945-86df28d37c2a" in namespace "downward-api-6543" to be "success or failure"
Apr 28 13:48:51.775: INFO: Pod "downwardapi-volume-e26e11e9-58dc-45e1-a945-86df28d37c2a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.948224ms
Apr 28 13:48:53.779: INFO: Pod "downwardapi-volume-e26e11e9-58dc-45e1-a945-86df28d37c2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022178307s
Apr 28 13:48:55.783: INFO: Pod "downwardapi-volume-e26e11e9-58dc-45e1-a945-86df28d37c2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025847711s
STEP: Saw pod success
Apr 28 13:48:55.783: INFO: Pod "downwardapi-volume-e26e11e9-58dc-45e1-a945-86df28d37c2a" satisfied condition "success or failure"
Apr 28 13:48:55.785: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e26e11e9-58dc-45e1-a945-86df28d37c2a container client-container:
STEP: delete the pod
Apr 28 13:48:55.859: INFO: Waiting for pod downwardapi-volume-e26e11e9-58dc-45e1-a945-86df28d37c2a to disappear
Apr 28 13:48:55.863: INFO: Pod downwardapi-volume-e26e11e9-58dc-45e1-a945-86df28d37c2a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:48:55.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6543" for this suite.
Apr 28 13:49:01.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:49:01.958: INFO: namespace downward-api-6543 deletion completed in 6.09215711s
• [SLOW TEST:10.239 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:49:01.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 28 13:49:02.021: INFO: Waiting up to 5m0s for pod "downward-api-fd7b4459-47e3-412f-9d67-c72ea3411493" in namespace "downward-api-983" to be "success or failure"
Apr 28 13:49:02.031: INFO: Pod "downward-api-fd7b4459-47e3-412f-9d67-c72ea3411493": Phase="Pending", Reason="", readiness=false. Elapsed: 10.035557ms
Apr 28 13:49:04.035: INFO: Pod "downward-api-fd7b4459-47e3-412f-9d67-c72ea3411493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014528756s
Apr 28 13:49:06.039: INFO: Pod "downward-api-fd7b4459-47e3-412f-9d67-c72ea3411493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018433857s
STEP: Saw pod success
Apr 28 13:49:06.039: INFO: Pod "downward-api-fd7b4459-47e3-412f-9d67-c72ea3411493" satisfied condition "success or failure"
Apr 28 13:49:06.042: INFO: Trying to get logs from node iruya-worker2 pod downward-api-fd7b4459-47e3-412f-9d67-c72ea3411493 container dapi-container:
STEP: delete the pod
Apr 28 13:49:06.083: INFO: Waiting for pod downward-api-fd7b4459-47e3-412f-9d67-c72ea3411493 to disappear
Apr 28 13:49:06.094: INFO: Pod downward-api-fd7b4459-47e3-412f-9d67-c72ea3411493 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:49:06.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-983" for this suite.
Apr 28 13:49:12.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:49:12.194: INFO: namespace downward-api-983 deletion completed in 6.096079036s
• [SLOW TEST:10.235 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:49:12.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 28 13:49:12.240: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be98cbd4-2c15-47c4-ab8b-f911c07a989a" in namespace "projected-8135" to be "success or failure"
Apr 28 13:49:12.273: INFO: Pod "downwardapi-volume-be98cbd4-2c15-47c4-ab8b-f911c07a989a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.716043ms
Apr 28 13:49:14.277: INFO: Pod "downwardapi-volume-be98cbd4-2c15-47c4-ab8b-f911c07a989a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036808657s
Apr 28 13:49:16.281: INFO: Pod "downwardapi-volume-be98cbd4-2c15-47c4-ab8b-f911c07a989a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040491772s
STEP: Saw pod success
Apr 28 13:49:16.281: INFO: Pod "downwardapi-volume-be98cbd4-2c15-47c4-ab8b-f911c07a989a" satisfied condition "success or failure"
Apr 28 13:49:16.283: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-be98cbd4-2c15-47c4-ab8b-f911c07a989a container client-container:
STEP: delete the pod
Apr 28 13:49:16.357: INFO: Waiting for pod downwardapi-volume-be98cbd4-2c15-47c4-ab8b-f911c07a989a to disappear
Apr 28 13:49:16.359: INFO: Pod downwardapi-volume-be98cbd4-2c15-47c4-ab8b-f911c07a989a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:49:16.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8135" for this suite.
Apr 28 13:49:22.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:49:22.453: INFO: namespace projected-8135 deletion completed in 6.0903161s
• [SLOW TEST:10.258 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:49:22.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 28 13:49:22.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6968'
Apr 28 13:49:22.637: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 28 13:49:22.637: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Apr 28 13:49:22.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6968'
Apr 28 13:49:22.777: INFO: stderr: ""
Apr 28 13:49:22.777: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:49:22.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6968" for this suite.
Apr 28 13:49:28.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:49:28.883: INFO: namespace kubectl-6968 deletion completed in 6.103351207s
• [SLOW TEST:6.430 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:49:28.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 28 13:49:29.098: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6e89a166-995e-4ac2-a6cb-3c84dcd325ae", Controller:(*bool)(0xc002a26802), BlockOwnerDeletion:(*bool)(0xc002a26803)}}
Apr 28 13:49:29.149: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"99449d00-c835-4ed9-a7b0-77f98f633c51", Controller:(*bool)(0xc002478a12), BlockOwnerDeletion:(*bool)(0xc002478a13)}}
Apr 28 13:49:29.181: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ffe2f009-9452-403f-a314-9fcd8928ed1f", Controller:(*bool)(0xc001b7cbca), BlockOwnerDeletion:(*bool)(0xc001b7cbcb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:49:34.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9720" for this suite.
Apr 28 13:49:40.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:49:40.306: INFO: namespace gc-9720 deletion completed in 6.103515754s
• [SLOW TEST:11.422 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:49:40.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:50:40.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-841" for this suite.
Apr 28 13:51:02.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:51:02.486: INFO: namespace container-probe-841 deletion completed in 22.090506631s
• [SLOW TEST:82.179 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:51:02.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 28 13:51:02.555: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ad6dc53-c161-4708-bcc9-ed18110a801f" in namespace "downward-api-1577" to be "success or failure"
Apr 28 13:51:02.558: INFO: Pod "downwardapi-volume-5ad6dc53-c161-4708-bcc9-ed18110a801f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533802ms
Apr 28 13:51:04.562: INFO: Pod "downwardapi-volume-5ad6dc53-c161-4708-bcc9-ed18110a801f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007157056s
Apr 28 13:51:06.566: INFO: Pod "downwardapi-volume-5ad6dc53-c161-4708-bcc9-ed18110a801f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011358921s
STEP: Saw pod success
Apr 28 13:51:06.566: INFO: Pod "downwardapi-volume-5ad6dc53-c161-4708-bcc9-ed18110a801f" satisfied condition "success or failure"
Apr 28 13:51:06.569: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5ad6dc53-c161-4708-bcc9-ed18110a801f container client-container:
STEP: delete the pod
Apr 28 13:51:06.603: INFO: Waiting for pod downwardapi-volume-5ad6dc53-c161-4708-bcc9-ed18110a801f to disappear
Apr 28 13:51:06.618: INFO: Pod downwardapi-volume-5ad6dc53-c161-4708-bcc9-ed18110a801f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:51:06.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1577" for this suite.
Apr 28 13:51:12.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:51:12.711: INFO: namespace downward-api-1577 deletion completed in 6.089948607s
• [SLOW TEST:10.225 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:51:12.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:51:18.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7422" for this suite.
Apr 28 13:51:24.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:51:25.061: INFO: namespace namespaces-7422 deletion completed in 6.078368202s
STEP: Destroying namespace "nsdeletetest-6351" for this suite.
Apr 28 13:51:25.063: INFO: Namespace nsdeletetest-6351 was already deleted
STEP: Destroying namespace "nsdeletetest-7709" for this suite.
Apr 28 13:51:31.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:51:31.141: INFO: namespace nsdeletetest-7709 deletion completed in 6.078587667s
• [SLOW TEST:18.430 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:51:31.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fa355fb1-d190-471f-bf44-356931635eda
STEP: Creating a pod to test consume secrets
Apr 28 13:51:31.270: INFO: Waiting up to 5m0s for pod "pod-secrets-cc5af36d-dc86-46ed-9521-83161cbf8ac1" in namespace "secrets-7524" to be "success or failure"
Apr 28 13:51:31.281: INFO: Pod "pod-secrets-cc5af36d-dc86-46ed-9521-83161cbf8ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.575168ms
Apr 28 13:51:33.285: INFO: Pod "pod-secrets-cc5af36d-dc86-46ed-9521-83161cbf8ac1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015389415s
Apr 28 13:51:35.290: INFO: Pod "pod-secrets-cc5af36d-dc86-46ed-9521-83161cbf8ac1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01988236s
STEP: Saw pod success
Apr 28 13:51:35.290: INFO: Pod "pod-secrets-cc5af36d-dc86-46ed-9521-83161cbf8ac1" satisfied condition "success or failure"
Apr 28 13:51:35.293: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-cc5af36d-dc86-46ed-9521-83161cbf8ac1 container secret-volume-test:
STEP: delete the pod
Apr 28 13:51:35.347: INFO: Waiting for pod pod-secrets-cc5af36d-dc86-46ed-9521-83161cbf8ac1 to disappear
Apr 28 13:51:35.358: INFO: Pod pod-secrets-cc5af36d-dc86-46ed-9521-83161cbf8ac1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:51:35.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7524" for this suite.
Apr 28 13:51:41.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:51:41.478: INFO: namespace secrets-7524 deletion completed in 6.117188607s STEP: Destroying namespace "secret-namespace-7285" for this suite. Apr 28 13:51:47.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:51:47.560: INFO: namespace secret-namespace-7285 deletion completed in 6.08151966s • [SLOW TEST:16.418 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:51:47.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 13:51:47.663: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-92a43ef0-b4f5-487e-ba3e-4acc5e331a84" in namespace "projected-3045" to be "success or failure" Apr 28 13:51:47.673: INFO: Pod "downwardapi-volume-92a43ef0-b4f5-487e-ba3e-4acc5e331a84": Phase="Pending", Reason="", readiness=false. Elapsed: 9.943105ms Apr 28 13:51:49.677: INFO: Pod "downwardapi-volume-92a43ef0-b4f5-487e-ba3e-4acc5e331a84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014339983s Apr 28 13:51:51.681: INFO: Pod "downwardapi-volume-92a43ef0-b4f5-487e-ba3e-4acc5e331a84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018598263s STEP: Saw pod success Apr 28 13:51:51.681: INFO: Pod "downwardapi-volume-92a43ef0-b4f5-487e-ba3e-4acc5e331a84" satisfied condition "success or failure" Apr 28 13:51:51.684: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-92a43ef0-b4f5-487e-ba3e-4acc5e331a84 container client-container: STEP: delete the pod Apr 28 13:51:51.705: INFO: Waiting for pod downwardapi-volume-92a43ef0-b4f5-487e-ba3e-4acc5e331a84 to disappear Apr 28 13:51:51.708: INFO: Pod downwardapi-volume-92a43ef0-b4f5-487e-ba3e-4acc5e331a84 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:51:51.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3045" for this suite. 
Apr 28 13:51:57.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:51:57.809: INFO: namespace projected-3045 deletion completed in 6.098021248s • [SLOW TEST:10.248 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:51:57.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:52:01.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5763" for this suite. 
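The hostAliases test above verifies that the kubelet renders `pod.spec.hostAliases` entries into the pod's `/etc/hosts`. A simplified sketch of that rendering (assumption: one line per IP with space-joined hostnames; the real kubelet also writes a managed-section comment header):

```python
def render_host_aliases(host_aliases):
    """Render pod.spec.hostAliases entries as /etc/hosts lines.

    Each entry maps one IP to one or more hostnames, e.g.
    {"ip": "127.0.0.1", "hostnames": ["foo.local", "bar.local"]}.
    """
    return "\n".join(
        "{}\t{}".format(a["ip"], " ".join(a["hostnames"])) for a in host_aliases
    )

print(render_host_aliases([
    {"ip": "123.45.67.89", "hostnames": ["foo.remote", "bar.remote"]},
]))  # 123.45.67.89	foo.remote bar.remote
```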
Apr 28 13:52:39.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:52:40.044: INFO: namespace kubelet-test-5763 deletion completed in 38.088815874s • [SLOW TEST:42.235 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:52:40.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9362 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace 
statefulset-9362 STEP: Creating statefulset with conflicting port in namespace statefulset-9362 STEP: Waiting until pod test-pod will start running in namespace statefulset-9362 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9362 Apr 28 13:52:44.186: INFO: Observed stateful pod in namespace: statefulset-9362, name: ss-0, uid: 4cc34f4f-bd99-4b9c-b9c3-e60f905753d4, status phase: Pending. Waiting for statefulset controller to delete. Apr 28 13:52:44.544: INFO: Observed stateful pod in namespace: statefulset-9362, name: ss-0, uid: 4cc34f4f-bd99-4b9c-b9c3-e60f905753d4, status phase: Failed. Waiting for statefulset controller to delete. Apr 28 13:52:44.561: INFO: Observed stateful pod in namespace: statefulset-9362, name: ss-0, uid: 4cc34f4f-bd99-4b9c-b9c3-e60f905753d4, status phase: Failed. Waiting for statefulset controller to delete. Apr 28 13:52:44.650: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9362 STEP: Removing pod with conflicting port in namespace statefulset-9362 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9362 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 28 13:52:50.735: INFO: Deleting all statefulset in ns statefulset-9362 Apr 28 13:52:50.738: INFO: Scaling statefulset ss to 0 Apr 28 13:53:00.759: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 13:53:00.763: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:53:00.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9362" for this suite. 
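The sequence above (ss-0 observed `Pending`, then `Failed` because of the port conflict, then deleted and recreated once the conflicting pod is removed) is the StatefulSet controller's recreate behavior. A toy single-replica reconcile step, purely illustrative and not the real controller logic:

```python
def reconcile(pod, port_free):
    """One reconcile step for a single-replica StatefulSet (toy model).

    The controller deletes a Failed pod; a later step recreates it with the
    same ordinal name, and it only runs once the conflicting host port is free.
    """
    if pod is None:
        return {"name": "ss-0", "phase": "Running" if port_free else "Failed"}
    if pod["phase"] == "Failed":
        return None  # delete the failed pod; it is recreated next step
    return pod

state = None
state = reconcile(state, port_free=False)  # created while port is taken -> Failed
state = reconcile(state, port_free=False)  # controller deletes the failed pod
state = reconcile(state, port_free=True)   # recreated after conflict removed
print(state["phase"])  # Running
```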
Apr 28 13:53:06.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:53:06.895: INFO: namespace statefulset-9362 deletion completed in 6.11071893s • [SLOW TEST:26.849 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:53:06.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 28 13:53:07.004: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7355,SelfLink:/api/v1/namespaces/watch-7355/configmaps/e2e-watch-test-watch-closed,UID:1357d69c-7d3e-4593-a4bd-8044e42e26fb,ResourceVersion:7905506,Generation:0,CreationTimestamp:2020-04-28 13:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 28 13:53:07.004: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7355,SelfLink:/api/v1/namespaces/watch-7355/configmaps/e2e-watch-test-watch-closed,UID:1357d69c-7d3e-4593-a4bd-8044e42e26fb,ResourceVersion:7905507,Generation:0,CreationTimestamp:2020-04-28 13:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 28 13:53:07.016: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7355,SelfLink:/api/v1/namespaces/watch-7355/configmaps/e2e-watch-test-watch-closed,UID:1357d69c-7d3e-4593-a4bd-8044e42e26fb,ResourceVersion:7905508,Generation:0,CreationTimestamp:2020-04-28 13:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 28 13:53:07.016: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7355,SelfLink:/api/v1/namespaces/watch-7355/configmaps/e2e-watch-test-watch-closed,UID:1357d69c-7d3e-4593-a4bd-8044e42e26fb,ResourceVersion:7905509,Generation:0,CreationTimestamp:2020-04-28 13:53:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:53:07.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7355" for this suite. 
Apr 28 13:53:13.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:53:13.145: INFO: namespace watch-7355 deletion completed in 6.125384904s • [SLOW TEST:6.250 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:53:13.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0428 13:53:53.531545 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 28 13:53:53.531: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:53:53.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8079" for this suite. 
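Both garbage-collector tests in this log exercise `deleteOptions.propagationPolicy`. A toy model of the difference between Orphan and the deleting policies, using plain dicts rather than real API objects:

```python
def delete_owner(objects, owner_uid, propagation="Orphan"):
    """Toy model of deleteOptions.propagationPolicy.

    With Orphan, dependents survive but their ownerReferences to the deleted
    owner are stripped; with Background/Foreground the GC deletes them too.
    """
    remaining = []
    for obj in objects:
        if obj["uid"] == owner_uid:
            continue  # the owner itself is always deleted
        refs = [r for r in obj.get("ownerReferences", []) if r != owner_uid]
        if propagation == "Orphan":
            remaining.append(dict(obj, ownerReferences=refs))
        elif refs or not obj.get("ownerReferences"):
            remaining.append(obj)  # kept: it did not depend on the owner
    return remaining

rc = {"uid": "rc-1"}
pod = {"uid": "pod-1", "ownerReferences": ["rc-1"]}
print(delete_owner([rc, pod], "rc-1", "Orphan"))      # pod survives, orphaned
print(delete_owner([rc, pod], "rc-1", "Background"))  # pod collected too
```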
Apr 28 13:54:01.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:54:01.693: INFO: namespace gc-8079 deletion completed in 8.158987525s • [SLOW TEST:48.548 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:54:01.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Apr 28 13:54:02.015: INFO: Waiting up to 5m0s for pod "var-expansion-d853bfd2-0b14-4b48-bd28-aedd15ee2e9f" in namespace "var-expansion-2804" to be "success or failure" Apr 28 13:54:02.099: INFO: Pod "var-expansion-d853bfd2-0b14-4b48-bd28-aedd15ee2e9f": Phase="Pending", Reason="", readiness=false. Elapsed: 83.644096ms Apr 28 13:54:04.103: INFO: Pod "var-expansion-d853bfd2-0b14-4b48-bd28-aedd15ee2e9f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.087987668s Apr 28 13:54:06.107: INFO: Pod "var-expansion-d853bfd2-0b14-4b48-bd28-aedd15ee2e9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092524066s STEP: Saw pod success Apr 28 13:54:06.108: INFO: Pod "var-expansion-d853bfd2-0b14-4b48-bd28-aedd15ee2e9f" satisfied condition "success or failure" Apr 28 13:54:06.111: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-d853bfd2-0b14-4b48-bd28-aedd15ee2e9f container dapi-container: STEP: delete the pod Apr 28 13:54:06.140: INFO: Waiting for pod var-expansion-d853bfd2-0b14-4b48-bd28-aedd15ee2e9f to disappear Apr 28 13:54:06.151: INFO: Pod var-expansion-d853bfd2-0b14-4b48-bd28-aedd15ee2e9f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:54:06.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2804" for this suite. Apr 28 13:54:12.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:54:12.268: INFO: namespace var-expansion-2804 deletion completed in 6.094167943s • [SLOW TEST:10.575 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 28 13:54:12.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 28 13:54:12.348: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Apr 28 13:54:12.833: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 28 13:54:15.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723678852, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723678852, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723678852, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723678852, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 13:54:17.785: INFO: Waited 723.763218ms for the sample-apiserver to be ready to handle requests. 
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:54:18.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4103" for this suite. Apr 28 13:54:24.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:54:24.560: INFO: namespace aggregator-4103 deletion completed in 6.170773065s • [SLOW TEST:12.292 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:54:24.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 13:54:24.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4341' Apr 28 13:54:27.125: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 28 13:54:27.125: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Apr 28 13:54:27.137: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 28 13:54:27.167: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 28 13:54:27.191: INFO: scanned /root for discovery docs: Apr 28 13:54:27.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4341' Apr 28 13:54:43.016: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 28 13:54:43.016: INFO: stdout: "Created e2e-test-nginx-rc-23cc93ef0488bf7dc01b0e6fefef7cf0\nScaling up e2e-test-nginx-rc-23cc93ef0488bf7dc01b0e6fefef7cf0 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-23cc93ef0488bf7dc01b0e6fefef7cf0 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-23cc93ef0488bf7dc01b0e6fefef7cf0 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 28 13:54:43.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4341' Apr 28 13:54:43.112: INFO: stderr: "" Apr 28 13:54:43.112: INFO: stdout: "e2e-test-nginx-rc-23cc93ef0488bf7dc01b0e6fefef7cf0-fxrd7 " Apr 28 13:54:43.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-23cc93ef0488bf7dc01b0e6fefef7cf0-fxrd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4341' Apr 28 13:54:43.216: INFO: stderr: "" Apr 28 13:54:43.216: INFO: stdout: "true" Apr 28 13:54:43.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-23cc93ef0488bf7dc01b0e6fefef7cf0-fxrd7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4341' Apr 28 13:54:43.305: INFO: stderr: "" Apr 28 13:54:43.305: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 28 13:54:43.305: INFO: e2e-test-nginx-rc-23cc93ef0488bf7dc01b0e6fefef7cf0-fxrd7 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 28 13:54:43.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4341' Apr 28 13:54:43.409: INFO: stderr: "" Apr 28 13:54:43.409: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:54:43.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4341" for this suite. 
Apr 28 13:54:49.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:54:49.522: INFO: namespace kubectl-4341 deletion completed in 6.108808007s • [SLOW TEST:24.961 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:54:49.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 28 13:54:49.611: INFO: Waiting up to 5m0s for pod "pod-9cf1c797-dc8b-423e-ad0e-0fcc0c939c53" in namespace "emptydir-7653" to be "success or failure" Apr 28 13:54:49.625: INFO: Pod "pod-9cf1c797-dc8b-423e-ad0e-0fcc0c939c53": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.073349ms Apr 28 13:54:51.629: INFO: Pod "pod-9cf1c797-dc8b-423e-ad0e-0fcc0c939c53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018274098s Apr 28 13:54:53.634: INFO: Pod "pod-9cf1c797-dc8b-423e-ad0e-0fcc0c939c53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022682097s STEP: Saw pod success Apr 28 13:54:53.634: INFO: Pod "pod-9cf1c797-dc8b-423e-ad0e-0fcc0c939c53" satisfied condition "success or failure" Apr 28 13:54:53.637: INFO: Trying to get logs from node iruya-worker2 pod pod-9cf1c797-dc8b-423e-ad0e-0fcc0c939c53 container test-container: STEP: delete the pod Apr 28 13:54:53.653: INFO: Waiting for pod pod-9cf1c797-dc8b-423e-ad0e-0fcc0c939c53 to disappear Apr 28 13:54:53.658: INFO: Pod pod-9cf1c797-dc8b-423e-ad0e-0fcc0c939c53 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:54:53.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7653" for this suite. 
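The emptyDir test above checks the file mode reported inside the tmpfs mount. Python's standard library can render a numeric mode in the same `ls`-style notation the test's container output uses:

```python
import stat

def mode_string(mode, is_dir=False):
    """Render a numeric permission mode as an ls-style string."""
    kind = stat.S_IFDIR if is_dir else stat.S_IFREG
    return stat.filemode(kind | mode)

print(mode_string(0o644))               # -rw-r--r--
print(mode_string(0o777, is_dir=True))  # drwxrwxrwx
```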
Apr 28 13:54:59.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:54:59.752: INFO: namespace emptydir-7653 deletion completed in 6.091624505s • [SLOW TEST:10.228 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:54:59.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 28 13:54:59.826: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:c0c5cab8-a942-481a-9c1c-ec3bb72ac358,ResourceVersion:7906132,Generation:0,CreationTimestamp:2020-04-28 13:54:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 28 13:54:59.826: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:c0c5cab8-a942-481a-9c1c-ec3bb72ac358,ResourceVersion:7906132,Generation:0,CreationTimestamp:2020-04-28 13:54:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 28 13:55:09.836: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:c0c5cab8-a942-481a-9c1c-ec3bb72ac358,ResourceVersion:7906152,Generation:0,CreationTimestamp:2020-04-28 13:54:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 28 13:55:09.836: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:c0c5cab8-a942-481a-9c1c-ec3bb72ac358,ResourceVersion:7906152,Generation:0,CreationTimestamp:2020-04-28 13:54:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 28 13:55:19.845: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:c0c5cab8-a942-481a-9c1c-ec3bb72ac358,ResourceVersion:7906172,Generation:0,CreationTimestamp:2020-04-28 13:54:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 28 13:55:19.845: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:c0c5cab8-a942-481a-9c1c-ec3bb72ac358,ResourceVersion:7906172,Generation:0,CreationTimestamp:2020-04-28 13:54:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 28 13:55:29.852: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:c0c5cab8-a942-481a-9c1c-ec3bb72ac358,ResourceVersion:7906193,Generation:0,CreationTimestamp:2020-04-28 13:54:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 28 13:55:29.852: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-a,UID:c0c5cab8-a942-481a-9c1c-ec3bb72ac358,ResourceVersion:7906193,Generation:0,CreationTimestamp:2020-04-28 13:54:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 28 13:55:39.860: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-b,UID:8daad5e2-cf1d-4212-b9aa-25665b00de36,ResourceVersion:7906213,Generation:0,CreationTimestamp:2020-04-28 13:55:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 28 13:55:39.861: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-b,UID:8daad5e2-cf1d-4212-b9aa-25665b00de36,ResourceVersion:7906213,Generation:0,CreationTimestamp:2020-04-28 13:55:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 28 13:55:49.867: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-b,UID:8daad5e2-cf1d-4212-b9aa-25665b00de36,ResourceVersion:7906234,Generation:0,CreationTimestamp:2020-04-28 13:55:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 28 13:55:49.867: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1743,SelfLink:/api/v1/namespaces/watch-1743/configmaps/e2e-watch-test-configmap-b,UID:8daad5e2-cf1d-4212-b9aa-25665b00de36,ResourceVersion:7906234,Generation:0,CreationTimestamp:2020-04-28 13:55:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:55:59.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1743" for this suite. 
Apr 28 13:56:05.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:56:05.987: INFO: namespace watch-1743 deletion completed in 6.114306508s • [SLOW TEST:66.235 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:56:05.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-3b95c87e-89d6-4365-93ad-a8f653868597 STEP: Creating a pod to test consume configMaps Apr 28 13:56:06.103: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5cb8c6d-b565-418e-985f-353a501a557a" in namespace "configmap-1968" to be "success or failure" Apr 28 13:56:06.108: INFO: Pod "pod-configmaps-b5cb8c6d-b565-418e-985f-353a501a557a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.93205ms Apr 28 13:56:08.157: INFO: Pod "pod-configmaps-b5cb8c6d-b565-418e-985f-353a501a557a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053613081s Apr 28 13:56:10.162: INFO: Pod "pod-configmaps-b5cb8c6d-b565-418e-985f-353a501a557a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059502096s STEP: Saw pod success Apr 28 13:56:10.163: INFO: Pod "pod-configmaps-b5cb8c6d-b565-418e-985f-353a501a557a" satisfied condition "success or failure" Apr 28 13:56:10.165: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b5cb8c6d-b565-418e-985f-353a501a557a container configmap-volume-test: STEP: delete the pod Apr 28 13:56:10.203: INFO: Waiting for pod pod-configmaps-b5cb8c6d-b565-418e-985f-353a501a557a to disappear Apr 28 13:56:10.215: INFO: Pod pod-configmaps-b5cb8c6d-b565-418e-985f-353a501a557a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:56:10.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1968" for this suite. Apr 28 13:56:16.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:56:16.305: INFO: namespace configmap-1968 deletion completed in 6.085989011s • [SLOW TEST:10.318 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:56:16.306: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1060, will wait for the garbage collector to delete the pods Apr 28 13:56:22.405: INFO: Deleting Job.batch foo took: 6.769982ms Apr 28 13:56:22.706: INFO: Terminating Job.batch foo pods took: 300.294756ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:57:02.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1060" for this suite. Apr 28 13:57:08.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:57:08.332: INFO: namespace job-1060 deletion completed in 6.112400117s • [SLOW TEST:52.026 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:57:08.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via 
the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5318/configmap-test-172f8873-819e-4caf-8798-d1b6ec1536c3 STEP: Creating a pod to test consume configMaps Apr 28 13:57:08.404: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc29e91a-0466-4c68-96b1-41d48054c287" in namespace "configmap-5318" to be "success or failure" Apr 28 13:57:08.408: INFO: Pod "pod-configmaps-bc29e91a-0466-4c68-96b1-41d48054c287": Phase="Pending", Reason="", readiness=false. Elapsed: 3.520402ms Apr 28 13:57:10.412: INFO: Pod "pod-configmaps-bc29e91a-0466-4c68-96b1-41d48054c287": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007757846s Apr 28 13:57:12.417: INFO: Pod "pod-configmaps-bc29e91a-0466-4c68-96b1-41d48054c287": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012383562s STEP: Saw pod success Apr 28 13:57:12.417: INFO: Pod "pod-configmaps-bc29e91a-0466-4c68-96b1-41d48054c287" satisfied condition "success or failure" Apr 28 13:57:12.420: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-bc29e91a-0466-4c68-96b1-41d48054c287 container env-test: STEP: delete the pod Apr 28 13:57:12.439: INFO: Waiting for pod pod-configmaps-bc29e91a-0466-4c68-96b1-41d48054c287 to disappear Apr 28 13:57:12.444: INFO: Pod pod-configmaps-bc29e91a-0466-4c68-96b1-41d48054c287 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:57:12.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5318" for this suite. 
Apr 28 13:57:18.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:57:18.554: INFO: namespace configmap-5318 deletion completed in 6.106596068s • [SLOW TEST:10.222 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:57:18.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-a537239f-2c10-4483-af71-f8412cde2d3f STEP: Creating a pod to test consume configMaps Apr 28 13:57:18.618: INFO: Waiting up to 5m0s for pod "pod-configmaps-b44f6109-03a5-4ae0-9f1b-2b1972ed4543" in namespace "configmap-3099" to be "success or failure" Apr 28 13:57:18.620: INFO: Pod "pod-configmaps-b44f6109-03a5-4ae0-9f1b-2b1972ed4543": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295665ms Apr 28 13:57:20.624: INFO: Pod "pod-configmaps-b44f6109-03a5-4ae0-9f1b-2b1972ed4543": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006099717s Apr 28 13:57:22.629: INFO: Pod "pod-configmaps-b44f6109-03a5-4ae0-9f1b-2b1972ed4543": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0107401s STEP: Saw pod success Apr 28 13:57:22.629: INFO: Pod "pod-configmaps-b44f6109-03a5-4ae0-9f1b-2b1972ed4543" satisfied condition "success or failure" Apr 28 13:57:22.632: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b44f6109-03a5-4ae0-9f1b-2b1972ed4543 container configmap-volume-test: STEP: delete the pod Apr 28 13:57:22.667: INFO: Waiting for pod pod-configmaps-b44f6109-03a5-4ae0-9f1b-2b1972ed4543 to disappear Apr 28 13:57:22.676: INFO: Pod pod-configmaps-b44f6109-03a5-4ae0-9f1b-2b1972ed4543 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:57:22.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3099" for this suite. Apr 28 13:57:28.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:57:28.776: INFO: namespace configmap-3099 deletion completed in 6.096507271s • [SLOW TEST:10.222 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:57:28.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 28 13:57:28.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5838' Apr 28 13:57:28.936: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 28 13:57:28.936: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Apr 28 13:57:28.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5838' Apr 28 13:57:29.086: INFO: stderr: "" Apr 28 13:57:29.086: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:57:29.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5838" for this suite. Apr 28 13:57:35.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:57:35.175: INFO: namespace kubectl-5838 deletion completed in 6.085383884s • [SLOW TEST:6.399 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:57:35.175: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-6qnb STEP: Creating a pod to test atomic-volume-subpath Apr 28 13:57:35.248: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6qnb" in namespace "subpath-5540" to be "success or failure" Apr 28 13:57:35.289: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Pending", Reason="", readiness=false. Elapsed: 41.00158ms Apr 28 13:57:37.294: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045253201s Apr 28 13:57:39.297: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. Elapsed: 4.048963686s Apr 28 13:57:41.301: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. Elapsed: 6.05284447s Apr 28 13:57:43.306: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. Elapsed: 8.057269923s Apr 28 13:57:45.312: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. Elapsed: 10.063444862s Apr 28 13:57:47.316: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. Elapsed: 12.068030811s Apr 28 13:57:49.319: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. Elapsed: 14.070975155s Apr 28 13:57:51.324: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.075130767s Apr 28 13:57:53.328: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. Elapsed: 18.079743114s Apr 28 13:57:55.333: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. Elapsed: 20.08439874s Apr 28 13:57:57.337: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Running", Reason="", readiness=true. Elapsed: 22.088870379s Apr 28 13:57:59.341: INFO: Pod "pod-subpath-test-projected-6qnb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.092129461s STEP: Saw pod success Apr 28 13:57:59.341: INFO: Pod "pod-subpath-test-projected-6qnb" satisfied condition "success or failure" Apr 28 13:57:59.344: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-6qnb container test-container-subpath-projected-6qnb: STEP: delete the pod Apr 28 13:57:59.393: INFO: Waiting for pod pod-subpath-test-projected-6qnb to disappear Apr 28 13:57:59.397: INFO: Pod pod-subpath-test-projected-6qnb no longer exists STEP: Deleting pod pod-subpath-test-projected-6qnb Apr 28 13:57:59.397: INFO: Deleting pod "pod-subpath-test-projected-6qnb" in namespace "subpath-5540" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:57:59.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5540" for this suite. 
Apr 28 13:58:05.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:58:05.492: INFO: namespace subpath-5540 deletion completed in 6.088901148s • [SLOW TEST:30.316 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:58:05.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-edd87f29-ccc6-41e1-aa00-1bb20b7442ea STEP: Creating a pod to test consume configMaps Apr 28 13:58:05.560: INFO: Waiting up to 5m0s for pod "pod-configmaps-994cfeb0-2399-468e-b1be-505b08aed64b" in namespace "configmap-8377" to be "success or failure" Apr 28 13:58:05.581: INFO: Pod "pod-configmaps-994cfeb0-2399-468e-b1be-505b08aed64b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.031237ms Apr 28 13:58:07.585: INFO: Pod "pod-configmaps-994cfeb0-2399-468e-b1be-505b08aed64b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024500734s Apr 28 13:58:09.715: INFO: Pod "pod-configmaps-994cfeb0-2399-468e-b1be-505b08aed64b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154862313s STEP: Saw pod success Apr 28 13:58:09.715: INFO: Pod "pod-configmaps-994cfeb0-2399-468e-b1be-505b08aed64b" satisfied condition "success or failure" Apr 28 13:58:09.757: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-994cfeb0-2399-468e-b1be-505b08aed64b container configmap-volume-test: STEP: delete the pod Apr 28 13:58:10.033: INFO: Waiting for pod pod-configmaps-994cfeb0-2399-468e-b1be-505b08aed64b to disappear Apr 28 13:58:10.037: INFO: Pod pod-configmaps-994cfeb0-2399-468e-b1be-505b08aed64b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 13:58:10.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8377" for this suite. 
Apr 28 13:58:16.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 13:58:16.159: INFO: namespace configmap-8377 deletion completed in 6.118773966s • [SLOW TEST:10.667 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 13:58:16.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 28 13:58:20.230: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 28 13:58:35.330: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:58:35.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8686" for this suite.
Apr 28 13:58:41.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:58:41.434: INFO: namespace pods-8686 deletion completed in 6.096596838s
• [SLOW TEST:25.275 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:58:41.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 28 13:58:41.551: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 28 13:58:46.555: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 28 13:58:46.556: INFO: Waiting for pods owned by replica set
"test-rollover-controller" to become ready Apr 28 13:58:48.560: INFO: Creating deployment "test-rollover-deployment" Apr 28 13:58:48.583: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 28 13:58:50.589: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 28 13:58:50.595: INFO: Ensure that both replica sets have 1 created replica Apr 28 13:58:50.610: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 28 13:58:50.617: INFO: Updating deployment test-rollover-deployment Apr 28 13:58:50.617: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 28 13:58:52.644: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 28 13:58:52.650: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 28 13:58:52.655: INFO: all replica sets need to contain the pod-template-hash label Apr 28 13:58:52.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679130, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 13:58:54.663: INFO: all replica sets need to contain 
the pod-template-hash label Apr 28 13:58:54.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679133, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 13:58:56.663: INFO: all replica sets need to contain the pod-template-hash label Apr 28 13:58:56.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679133, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 13:58:58.663: INFO: all replica sets need to contain the pod-template-hash label Apr 28 13:58:58.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679133, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 13:59:00.664: INFO: all replica sets need to contain the pod-template-hash label Apr 28 13:59:00.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679133, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, 
loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 13:59:02.671: INFO: all replica sets need to contain the pod-template-hash label Apr 28 13:59:02.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679133, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723679128, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 13:59:04.663: INFO: Apr 28 13:59:04.663: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 28 13:59:04.671: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6741,SelfLink:/apis/apps/v1/namespaces/deployment-6741/deployments/test-rollover-deployment,UID:afe2a232-a660-40b7-a91f-9b7f1dec9847,ResourceVersion:7906922,Generation:2,CreationTimestamp:2020-04-28 13:58:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-28 13:58:48 +0000 UTC 2020-04-28 13:58:48 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-28 13:59:03 +0000 UTC 2020-04-28 13:58:48 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 28 13:59:04.675: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6741,SelfLink:/apis/apps/v1/namespaces/deployment-6741/replicasets/test-rollover-deployment-854595fc44,UID:5ac65dc8-acc4-4070-880a-fb1f9a0b3fee,ResourceVersion:7906911,Generation:2,CreationTimestamp:2020-04-28 13:58:50 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment afe2a232-a660-40b7-a91f-9b7f1dec9847 0xc002166937 0xc002166938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 28 13:59:04.675: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 28 13:59:04.675: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6741,SelfLink:/apis/apps/v1/namespaces/deployment-6741/replicasets/test-rollover-controller,UID:a8a3c655-3b25-4d58-a5fb-7bd46796bfd6,ResourceVersion:7906920,Generation:2,CreationTimestamp:2020-04-28 13:58:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment afe2a232-a660-40b7-a91f-9b7f1dec9847 0xc002166867 0xc002166868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 13:59:04.675: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6741,SelfLink:/apis/apps/v1/namespaces/deployment-6741/replicasets/test-rollover-deployment-9b8b997cf,UID:1643fd43-09aa-498f-bcff-87dee9501897,ResourceVersion:7906879,Generation:2,CreationTimestamp:2020-04-28 13:58:48 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment afe2a232-a660-40b7-a91f-9b7f1dec9847 0xc002166a00 0xc002166a01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 13:59:04.678: INFO: Pod "test-rollover-deployment-854595fc44-2n5g2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-2n5g2,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6741,SelfLink:/api/v1/namespaces/deployment-6741/pods/test-rollover-deployment-854595fc44-2n5g2,UID:5ddcc2de-ce00-44f6-98a0-a738f73a0386,ResourceVersion:7906889,Generation:0,CreationTimestamp:2020-04-28 13:58:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 5ac65dc8-acc4-4070-880a-fb1f9a0b3fee 0xc0021676b7 0xc0021676b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bm7h9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bm7h9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bm7h9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002167730} {node.kubernetes.io/unreachable Exists NoExecute 0xc002167750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:58:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:58:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:58:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 13:58:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.103,StartTime:2020-04-28 13:58:50 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-28 13:58:53 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://e108f9b8f26d0229afe3060a69e746f373eed6e5b5b42394dd5c2ed28515396e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 13:59:04.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6741" for this suite.
Apr 28 13:59:12.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 13:59:12.808: INFO: namespace deployment-6741 deletion completed in 8.125765432s
• [SLOW TEST:31.373 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 13:59:12.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-59523a98-0c56-4261-9b91-952c276d85d2 in namespace container-probe-3463
Apr 28 13:59:16.900: INFO: Started pod liveness-59523a98-0c56-4261-9b91-952c276d85d2 in namespace container-probe-3463
STEP: checking the pod's current state and verifying that restartCount is present
Apr 28 13:59:16.903: INFO: Initial restart count of pod liveness-59523a98-0c56-4261-9b91-952c276d85d2 is 0
Apr 28 13:59:32.939: INFO: Restart count of pod container-probe-3463/liveness-59523a98-0c56-4261-9b91-952c276d85d2 is now 1 (16.035706376s elapsed)
Apr 28 13:59:53.009: INFO: Restart count of pod container-probe-3463/liveness-59523a98-0c56-4261-9b91-952c276d85d2 is now 2 (36.105938173s elapsed)
Apr 28 14:00:13.086: INFO: Restart count of pod container-probe-3463/liveness-59523a98-0c56-4261-9b91-952c276d85d2 is now 3 (56.183157881s elapsed)
Apr 28 14:00:33.128: INFO: Restart count of pod container-probe-3463/liveness-59523a98-0c56-4261-9b91-952c276d85d2 is now 4 (1m16.224455592s elapsed)
Apr 28 14:01:43.344: INFO: Restart count of pod container-probe-3463/liveness-59523a98-0c56-4261-9b91-952c276d85d2 is now 5 (2m26.440811733s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:01:43.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3463" for this suite.
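The probe test above relies on a container whose liveness check is guaranteed to fail, so the kubelet keeps restarting it and `restartCount` climbs monotonically (0 → 1 → 2 → … in the log, at roughly the probe period plus back-off). A minimal sketch of such a pod; the name `liveness-demo`, the image, and the probe command are illustrative assumptions, not the test's exact spec:

```shell
# Emit a pod manifest whose exec liveness probe always fails:
# /tmp/health never exists, so every probe run exits non-zero and the
# kubelet kills and restarts the container, bumping restartCount.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    args: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
)
printf '%s\n' "$manifest"
# On a live cluster, watch the count climb with:
#   kubectl get pod liveness-demo \
#     -o jsonpath='{.status.containerStatuses[0].restartCount}'
```

The growing gaps between restarts in the log (16s, then 20s, then ~70s) come from the kubelet's exponential crash-loop back-off, not from the probe settings themselves.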
Apr 28 14:01:49.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:01:49.478: INFO: namespace container-probe-3463 deletion completed in 6.119392098s
• [SLOW TEST:156.670 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:01:49.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2211
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 28 14:01:49.521: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 28 14:02:11.671: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.47:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2211 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true
CaptureStderr:true PreserveWhitespace:false}
Apr 28 14:02:11.671: INFO: >>> kubeConfig: /root/.kube/config
I0428 14:02:11.708479 6 log.go:172] (0xc0019420b0) (0xc0016d6fa0) Create stream
I0428 14:02:11.708525 6 log.go:172] (0xc0019420b0) (0xc0016d6fa0) Stream added, broadcasting: 1
I0428 14:02:11.713643 6 log.go:172] (0xc0019420b0) Reply frame received for 1
I0428 14:02:11.713726 6 log.go:172] (0xc0019420b0) (0xc000e975e0) Create stream
I0428 14:02:11.713758 6 log.go:172] (0xc0019420b0) (0xc000e975e0) Stream added, broadcasting: 3
I0428 14:02:11.722508 6 log.go:172] (0xc0019420b0) Reply frame received for 3
I0428 14:02:11.722543 6 log.go:172] (0xc0019420b0) (0xc001202460) Create stream
I0428 14:02:11.722555 6 log.go:172] (0xc0019420b0) (0xc001202460) Stream added, broadcasting: 5
I0428 14:02:11.724176 6 log.go:172] (0xc0019420b0) Reply frame received for 5
I0428 14:02:11.807132 6 log.go:172] (0xc0019420b0) Data frame received for 3
I0428 14:02:11.807181 6 log.go:172] (0xc000e975e0) (3) Data frame handling
I0428 14:02:11.807220 6 log.go:172] (0xc000e975e0) (3) Data frame sent
I0428 14:02:11.807322 6 log.go:172] (0xc0019420b0) Data frame received for 5
I0428 14:02:11.807341 6 log.go:172] (0xc001202460) (5) Data frame handling
I0428 14:02:11.807744 6 log.go:172] (0xc0019420b0) Data frame received for 3
I0428 14:02:11.807760 6 log.go:172] (0xc000e975e0) (3) Data frame handling
I0428 14:02:11.809979 6 log.go:172] (0xc0019420b0) Data frame received for 1
I0428 14:02:11.809997 6 log.go:172] (0xc0016d6fa0) (1) Data frame handling
I0428 14:02:11.810005 6 log.go:172] (0xc0016d6fa0) (1) Data frame sent
I0428 14:02:11.810014 6 log.go:172] (0xc0019420b0) (0xc0016d6fa0) Stream removed, broadcasting: 1
I0428 14:02:11.810026 6 log.go:172] (0xc0019420b0) Go away received
I0428 14:02:11.810147 6 log.go:172] (0xc0019420b0) (0xc0016d6fa0) Stream removed, broadcasting: 1
I0428 14:02:11.810175 6 log.go:172] (0xc0019420b0) (0xc000e975e0) Stream removed, broadcasting: 3 I0428
14:02:11.810191 6 log.go:172] (0xc0019420b0) (0xc001202460) Stream removed, broadcasting: 5
Apr 28 14:02:11.810: INFO: Found all expected endpoints: [netserver-0]
Apr 28 14:02:11.842: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.105:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2211 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 28 14:02:11.842: INFO: >>> kubeConfig: /root/.kube/config
I0428 14:02:11.873886 6 log.go:172] (0xc0015244d0) (0xc001bf9220) Create stream
I0428 14:02:11.873917 6 log.go:172] (0xc0015244d0) (0xc001bf9220) Stream added, broadcasting: 1
I0428 14:02:11.876020 6 log.go:172] (0xc0015244d0) Reply frame received for 1
I0428 14:02:11.876074 6 log.go:172] (0xc0015244d0) (0xc000e97720) Create stream
I0428 14:02:11.876097 6 log.go:172] (0xc0015244d0) (0xc000e97720) Stream added, broadcasting: 3
I0428 14:02:11.877055 6 log.go:172] (0xc0015244d0) Reply frame received for 3
I0428 14:02:11.877095 6 log.go:172] (0xc0015244d0) (0xc001202780) Create stream
I0428 14:02:11.877252 6 log.go:172] (0xc0015244d0) (0xc001202780) Stream added, broadcasting: 5
I0428 14:02:11.878202 6 log.go:172] (0xc0015244d0) Reply frame received for 5
I0428 14:02:11.953102 6 log.go:172] (0xc0015244d0) Data frame received for 3
I0428 14:02:11.953285 6 log.go:172] (0xc000e97720) (3) Data frame handling
I0428 14:02:11.953310 6 log.go:172] (0xc000e97720) (3) Data frame sent
I0428 14:02:11.953603 6 log.go:172] (0xc0015244d0) Data frame received for 5
I0428 14:02:11.953632 6 log.go:172] (0xc001202780) (5) Data frame handling
I0428 14:02:11.953666 6 log.go:172] (0xc0015244d0) Data frame received for 3
I0428 14:02:11.953687 6 log.go:172] (0xc000e97720) (3) Data frame handling
I0428 14:02:11.955043 6 log.go:172] (0xc0015244d0) Data frame received for 1
I0428 14:02:11.955101 6 log.go:172] (0xc001bf9220) (1) Data frame handling I0428
14:02:11.955132 6 log.go:172] (0xc001bf9220) (1) Data frame sent I0428 14:02:11.955157 6 log.go:172] (0xc0015244d0) (0xc001bf9220) Stream removed, broadcasting: 1 I0428 14:02:11.955178 6 log.go:172] (0xc0015244d0) Go away received I0428 14:02:11.955333 6 log.go:172] (0xc0015244d0) (0xc001bf9220) Stream removed, broadcasting: 1 I0428 14:02:11.955372 6 log.go:172] (0xc0015244d0) (0xc000e97720) Stream removed, broadcasting: 3 I0428 14:02:11.955392 6 log.go:172] (0xc0015244d0) (0xc001202780) Stream removed, broadcasting: 5 Apr 28 14:02:11.955: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:02:11.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2211" for this suite. Apr 28 14:02:33.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:02:34.044: INFO: namespace pod-network-test-2211 deletion completed in 22.085712429s • [SLOW TEST:44.565 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 28 14:02:34.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Apr 28 14:02:34.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9795 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 28 14:02:37.542: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0428 14:02:37.484875 1285 log.go:172] (0xc000a2e000) (0xc000294140) Create stream\nI0428 14:02:37.484935 1285 log.go:172] (0xc000a2e000) (0xc000294140) Stream added, broadcasting: 1\nI0428 14:02:37.487469 1285 log.go:172] (0xc000a2e000) Reply frame received for 1\nI0428 14:02:37.487508 1285 log.go:172] (0xc000a2e000) (0xc000304000) Create stream\nI0428 14:02:37.487533 1285 log.go:172] (0xc000a2e000) (0xc000304000) Stream added, broadcasting: 3\nI0428 14:02:37.488607 1285 log.go:172] (0xc000a2e000) Reply frame received for 3\nI0428 14:02:37.488658 1285 log.go:172] (0xc000a2e000) (0xc0002941e0) Create stream\nI0428 14:02:37.488679 1285 log.go:172] (0xc000a2e000) (0xc0002941e0) Stream added, broadcasting: 5\nI0428 14:02:37.489649 1285 log.go:172] (0xc000a2e000) Reply frame received for 5\nI0428 14:02:37.489683 1285 log.go:172] (0xc000a2e000) (0xc0003040a0) Create 
stream\nI0428 14:02:37.489694 1285 log.go:172] (0xc000a2e000) (0xc0003040a0) Stream added, broadcasting: 7\nI0428 14:02:37.490575 1285 log.go:172] (0xc000a2e000) Reply frame received for 7\nI0428 14:02:37.490713 1285 log.go:172] (0xc000304000) (3) Writing data frame\nI0428 14:02:37.490835 1285 log.go:172] (0xc000304000) (3) Writing data frame\nI0428 14:02:37.491518 1285 log.go:172] (0xc000a2e000) Data frame received for 5\nI0428 14:02:37.491535 1285 log.go:172] (0xc0002941e0) (5) Data frame handling\nI0428 14:02:37.491549 1285 log.go:172] (0xc0002941e0) (5) Data frame sent\nI0428 14:02:37.492196 1285 log.go:172] (0xc000a2e000) Data frame received for 5\nI0428 14:02:37.492216 1285 log.go:172] (0xc0002941e0) (5) Data frame handling\nI0428 14:02:37.492231 1285 log.go:172] (0xc0002941e0) (5) Data frame sent\nI0428 14:02:37.519177 1285 log.go:172] (0xc000a2e000) Data frame received for 5\nI0428 14:02:37.519214 1285 log.go:172] (0xc0002941e0) (5) Data frame handling\nI0428 14:02:37.519281 1285 log.go:172] (0xc000a2e000) Data frame received for 7\nI0428 14:02:37.519336 1285 log.go:172] (0xc0003040a0) (7) Data frame handling\nI0428 14:02:37.519393 1285 log.go:172] (0xc000a2e000) Data frame received for 1\nI0428 14:02:37.519406 1285 log.go:172] (0xc000294140) (1) Data frame handling\nI0428 14:02:37.519415 1285 log.go:172] (0xc000294140) (1) Data frame sent\nI0428 14:02:37.519769 1285 log.go:172] (0xc000a2e000) (0xc000294140) Stream removed, broadcasting: 1\nI0428 14:02:37.519850 1285 log.go:172] (0xc000a2e000) (0xc000294140) Stream removed, broadcasting: 1\nI0428 14:02:37.519862 1285 log.go:172] (0xc000a2e000) (0xc000304000) Stream removed, broadcasting: 3\nI0428 14:02:37.519870 1285 log.go:172] (0xc000a2e000) (0xc0002941e0) Stream removed, broadcasting: 5\nI0428 14:02:37.519882 1285 log.go:172] (0xc000a2e000) (0xc0003040a0) Stream removed, broadcasting: 7\nI0428 14:02:37.520863 1285 log.go:172] (0xc000a2e000) (0xc000304000) Stream removed, broadcasting: 3\nI0428 
14:02:37.520889 1285 log.go:172] (0xc000a2e000) Go away received\n" Apr 28 14:02:37.542: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:02:39.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9795" for this suite. Apr 28 14:02:45.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:02:45.660: INFO: namespace kubectl-9795 deletion completed in 6.106541464s • [SLOW TEST:11.615 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:02:45.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 28 14:02:45.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6398' Apr 28 14:02:46.039: INFO: stderr: "" Apr 28 14:02:46.039: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 28 14:02:46.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6398' Apr 28 14:02:46.146: INFO: stderr: "" Apr 28 14:02:46.146: INFO: stdout: "update-demo-nautilus-7ltz6 update-demo-nautilus-r88lh " Apr 28 14:02:46.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7ltz6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:02:46.236: INFO: stderr: "" Apr 28 14:02:46.236: INFO: stdout: "" Apr 28 14:02:46.236: INFO: update-demo-nautilus-7ltz6 is created but not running Apr 28 14:02:51.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6398' Apr 28 14:02:51.342: INFO: stderr: "" Apr 28 14:02:51.342: INFO: stdout: "update-demo-nautilus-7ltz6 update-demo-nautilus-r88lh " Apr 28 14:02:51.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7ltz6 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:02:51.430: INFO: stderr: "" Apr 28 14:02:51.430: INFO: stdout: "true" Apr 28 14:02:51.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7ltz6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:02:51.522: INFO: stderr: "" Apr 28 14:02:51.522: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 14:02:51.523: INFO: validating pod update-demo-nautilus-7ltz6 Apr 28 14:02:51.527: INFO: got data: { "image": "nautilus.jpg" } Apr 28 14:02:51.527: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 14:02:51.527: INFO: update-demo-nautilus-7ltz6 is verified up and running Apr 28 14:02:51.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r88lh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:02:51.616: INFO: stderr: "" Apr 28 14:02:51.616: INFO: stdout: "true" Apr 28 14:02:51.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r88lh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:02:51.710: INFO: stderr: "" Apr 28 14:02:51.710: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 14:02:51.710: INFO: validating pod update-demo-nautilus-r88lh Apr 28 14:02:51.713: INFO: got data: { "image": "nautilus.jpg" } Apr 28 14:02:51.713: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 14:02:51.713: INFO: update-demo-nautilus-r88lh is verified up and running STEP: scaling down the replication controller Apr 28 14:02:51.715: INFO: scanned /root for discovery docs: Apr 28 14:02:51.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6398' Apr 28 14:02:52.856: INFO: stderr: "" Apr 28 14:02:52.856: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 28 14:02:52.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6398' Apr 28 14:02:52.950: INFO: stderr: "" Apr 28 14:02:52.950: INFO: stdout: "update-demo-nautilus-7ltz6 update-demo-nautilus-r88lh " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 28 14:02:57.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6398' Apr 28 14:02:58.049: INFO: stderr: "" Apr 28 14:02:58.049: INFO: stdout: "update-demo-nautilus-7ltz6 update-demo-nautilus-r88lh " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 28 14:03:03.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6398' Apr 28 14:03:03.150: INFO: stderr: "" Apr 28 14:03:03.150: INFO: stdout: "update-demo-nautilus-r88lh " Apr 28 14:03:03.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r88lh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:03:03.244: INFO: stderr: "" Apr 28 14:03:03.245: INFO: stdout: "true" Apr 28 14:03:03.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r88lh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:03:03.333: INFO: stderr: "" Apr 28 14:03:03.333: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 14:03:03.333: INFO: validating pod update-demo-nautilus-r88lh Apr 28 14:03:03.336: INFO: got data: { "image": "nautilus.jpg" } Apr 28 14:03:03.336: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 14:03:03.336: INFO: update-demo-nautilus-r88lh is verified up and running STEP: scaling up the replication controller Apr 28 14:03:03.338: INFO: scanned /root for discovery docs: Apr 28 14:03:03.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6398' Apr 28 14:03:04.456: INFO: stderr: "" Apr 28 14:03:04.456: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 28 14:03:04.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6398' Apr 28 14:03:04.553: INFO: stderr: "" Apr 28 14:03:04.553: INFO: stdout: "update-demo-nautilus-hkr5q update-demo-nautilus-r88lh " Apr 28 14:03:04.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hkr5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:03:04.650: INFO: stderr: "" Apr 28 14:03:04.650: INFO: stdout: "" Apr 28 14:03:04.650: INFO: update-demo-nautilus-hkr5q is created but not running Apr 28 14:03:09.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6398' Apr 28 14:03:09.763: INFO: stderr: "" Apr 28 14:03:09.763: INFO: stdout: "update-demo-nautilus-hkr5q update-demo-nautilus-r88lh " Apr 28 14:03:09.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hkr5q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:03:09.858: INFO: stderr: "" Apr 28 14:03:09.858: INFO: stdout: "true" Apr 28 14:03:09.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hkr5q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:03:09.938: INFO: stderr: "" Apr 28 14:03:09.938: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 14:03:09.938: INFO: validating pod update-demo-nautilus-hkr5q Apr 28 14:03:09.942: INFO: got data: { "image": "nautilus.jpg" } Apr 28 14:03:09.942: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 14:03:09.942: INFO: update-demo-nautilus-hkr5q is verified up and running Apr 28 14:03:09.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r88lh -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:03:10.037: INFO: stderr: "" Apr 28 14:03:10.037: INFO: stdout: "true" Apr 28 14:03:10.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r88lh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6398' Apr 28 14:03:10.144: INFO: stderr: "" Apr 28 14:03:10.144: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 14:03:10.144: INFO: validating pod update-demo-nautilus-r88lh Apr 28 14:03:10.148: INFO: got data: { "image": "nautilus.jpg" } Apr 28 14:03:10.148: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 14:03:10.148: INFO: update-demo-nautilus-r88lh is verified up and running STEP: using delete to clean up resources Apr 28 14:03:10.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6398' Apr 28 14:03:10.264: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 28 14:03:10.264: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 28 14:03:10.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6398' Apr 28 14:03:10.394: INFO: stderr: "No resources found.\n" Apr 28 14:03:10.394: INFO: stdout: "" Apr 28 14:03:10.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6398 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 14:03:10.508: INFO: stderr: "" Apr 28 14:03:10.508: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:03:10.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6398" for this suite. 
Apr 28 14:03:32.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:03:32.609: INFO: namespace kubectl-6398 deletion completed in 22.097032057s
• [SLOW TEST:46.949 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:03:32.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7b5dd5ed-7ce9-48fb-be31-3f27db1a6d5b
STEP: Creating a pod to test consume secrets
Apr 28 14:03:32.718: INFO: Waiting up to 5m0s for pod "pod-secrets-b7bb5082-e3e0-407a-8e84-9ade027bb766" in namespace "secrets-2083" to be "success or failure"
Apr 28 14:03:32.726: INFO: Pod "pod-secrets-b7bb5082-e3e0-407a-8e84-9ade027bb766": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095994ms
Apr 28 14:03:34.756: INFO: Pod "pod-secrets-b7bb5082-e3e0-407a-8e84-9ade027bb766": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037653901s
Apr 28 14:03:36.760: INFO: Pod "pod-secrets-b7bb5082-e3e0-407a-8e84-9ade027bb766": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041551605s
STEP: Saw pod success
Apr 28 14:03:36.760: INFO: Pod "pod-secrets-b7bb5082-e3e0-407a-8e84-9ade027bb766" satisfied condition "success or failure"
Apr 28 14:03:36.763: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b7bb5082-e3e0-407a-8e84-9ade027bb766 container secret-volume-test:
STEP: delete the pod
Apr 28 14:03:36.813: INFO: Waiting for pod pod-secrets-b7bb5082-e3e0-407a-8e84-9ade027bb766 to disappear
Apr 28 14:03:36.817: INFO: Pod pod-secrets-b7bb5082-e3e0-407a-8e84-9ade027bb766 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:03:36.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2083" for this suite.
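The secrets test above creates a secret, mounts it into a short-lived pod with a non-root security context, and waits for the pod to reach its "success or failure" condition. A minimal manifest in that shape might look as follows; all names, UID/GID values, the image, and the mode are illustrative assumptions, not values taken from this run:

```yaml
# Illustrative sketch only: names, runAsUser/fsGroup, image, and mode
# are assumptions, not read from the test output above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000      # non-root, as in the test name
    fsGroup: 1001        # group ownership applied to the mounted volume
  restartPolicy: Never   # the pod is expected to run once and exit
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      defaultMode: 0400  # file mode applied to the projected secret keys
```

With `defaultMode` and `fsGroup` both set, the kubelet projects the secret files with the given mode and chowns the volume to the supplemental group, which is what the test verifies from inside the container.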
Apr 28 14:03:42.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:03:42.926: INFO: namespace secrets-2083 deletion completed in 6.106235221s
• [SLOW TEST:10.316 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:03:42.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 28 14:03:43.005: INFO: Waiting up to 5m0s for pod "pod-b2cc1ed9-0448-4c8d-ba6c-932392dcd2b9" in namespace "emptydir-5414" to be "success or failure"
Apr 28 14:03:43.008: INFO: Pod "pod-b2cc1ed9-0448-4c8d-ba6c-932392dcd2b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.917072ms
Apr 28 14:03:45.026: INFO: Pod "pod-b2cc1ed9-0448-4c8d-ba6c-932392dcd2b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020140682s
Apr 28 14:03:47.030: INFO: Pod "pod-b2cc1ed9-0448-4c8d-ba6c-932392dcd2b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02464569s
STEP: Saw pod success
Apr 28 14:03:47.030: INFO: Pod "pod-b2cc1ed9-0448-4c8d-ba6c-932392dcd2b9" satisfied condition "success or failure"
Apr 28 14:03:47.034: INFO: Trying to get logs from node iruya-worker pod pod-b2cc1ed9-0448-4c8d-ba6c-932392dcd2b9 container test-container:
STEP: delete the pod
Apr 28 14:03:47.061: INFO: Waiting for pod pod-b2cc1ed9-0448-4c8d-ba6c-932392dcd2b9 to disappear
Apr 28 14:03:47.072: INFO: Pod pod-b2cc1ed9-0448-4c8d-ba6c-932392dcd2b9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:03:47.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5414" for this suite.
Apr 28 14:03:53.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:03:53.161: INFO: namespace emptydir-5414 deletion completed in 6.085195644s
• [SLOW TEST:10.235 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:03:53.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-0f2ae46a-d08a-4400-93fc-d05d859a1389 in namespace container-probe-652
Apr 28 14:03:57.227: INFO: Started pod test-webserver-0f2ae46a-d08a-4400-93fc-d05d859a1389 in namespace container-probe-652
STEP: checking the pod's current state and verifying that restartCount is present
Apr 28 14:03:57.230: INFO: Initial restart count of pod test-webserver-0f2ae46a-d08a-4400-93fc-d05d859a1389 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:07:58.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-652" for this suite.
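The probe test above starts a small web-server pod with an HTTP liveness probe on /healthz and then simply watches for about four minutes that `restartCount` stays at 0 (a healthy probe must never trigger a restart). A sketch of such a pod spec follows; the image name and probe timings are assumptions for illustration, not values read from this log:

```yaml
# Illustrative sketch only: image and probe timings are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/test-webserver  # assumption: any image serving /healthz on 8080 works
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15  # give the server time to start before probing
      periodSeconds: 10        # probe every 10s
      failureThreshold: 3      # restart only after 3 consecutive failures
```

As long as /healthz keeps returning 2xx, the kubelet never restarts the container, which is exactly the invariant the test asserts by comparing restart counts over time.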
Apr 28 14:08:04.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:08:04.164: INFO: namespace container-probe-652 deletion completed in 6.122393228s • [SLOW TEST:251.003 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:08:04.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 28 14:08:08.753: INFO: Successfully updated pod "pod-update-activedeadlineseconds-329181ef-a3c4-4336-a03c-e38a5af9479c" Apr 28 14:08:08.753: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-329181ef-a3c4-4336-a03c-e38a5af9479c" in namespace "pods-9564" to be "terminated due to deadline exceeded" Apr 28 14:08:08.774: INFO: Pod 
"pod-update-activedeadlineseconds-329181ef-a3c4-4336-a03c-e38a5af9479c": Phase="Running", Reason="", readiness=true. Elapsed: 20.61264ms Apr 28 14:08:10.778: INFO: Pod "pod-update-activedeadlineseconds-329181ef-a3c4-4336-a03c-e38a5af9479c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.024951785s Apr 28 14:08:10.778: INFO: Pod "pod-update-activedeadlineseconds-329181ef-a3c4-4336-a03c-e38a5af9479c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:08:10.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9564" for this suite. Apr 28 14:08:16.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:08:16.876: INFO: namespace pods-9564 deletion completed in 6.09323508s • [SLOW TEST:12.712 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:08:16.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Apr 28 14:08:16.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9591' Apr 28 14:08:19.577: INFO: stderr: "" Apr 28 14:08:19.577: INFO: stdout: "pod/pause created\n" Apr 28 14:08:19.577: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 28 14:08:19.578: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9591" to be "running and ready" Apr 28 14:08:19.595: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 17.761575ms Apr 28 14:08:21.600: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022288144s Apr 28 14:08:23.604: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.02682543s Apr 28 14:08:23.604: INFO: Pod "pause" satisfied condition "running and ready" Apr 28 14:08:23.604: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Apr 28 14:08:23.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9591' Apr 28 14:08:23.699: INFO: stderr: "" Apr 28 14:08:23.699: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 28 14:08:23.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9591' Apr 28 14:08:23.783: INFO: stderr: "" Apr 28 14:08:23.783: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 28 14:08:23.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9591' Apr 28 14:08:23.879: INFO: stderr: "" Apr 28 14:08:23.879: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 28 14:08:23.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9591' Apr 28 14:08:23.992: INFO: stderr: "" Apr 28 14:08:23.992: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Apr 28 14:08:23.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9591' Apr 28 14:08:24.142: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running 
resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 14:08:24.142: INFO: stdout: "pod \"pause\" force deleted\n" Apr 28 14:08:24.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9591' Apr 28 14:08:24.352: INFO: stderr: "No resources found.\n" Apr 28 14:08:24.352: INFO: stdout: "" Apr 28 14:08:24.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9591 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 14:08:24.434: INFO: stderr: "" Apr 28 14:08:24.434: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:08:24.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9591" for this suite. 
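The label add/verify/remove cycle run by this test corresponds to the following command sequence; this is a sketch that requires a live cluster and the `pause` pod, so it is not runnable standalone:

```shell
# Add a label, display it in its own column, then remove it.
kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-9591
kubectl get pod pause -L testing-label --namespace=kubectl-9591   # TESTING-LABEL column shows the value
kubectl label pods pause testing-label- --namespace=kubectl-9591  # trailing '-' removes the label
```

The trailing-dash form (`testing-label-`) is kubectl's removal syntax, which is why the log shows the label column empty on the second `get`.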
Apr 28 14:08:30.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:08:30.538: INFO: namespace kubectl-9591 deletion completed in 6.100238023s • [SLOW TEST:13.662 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:08:30.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 28 14:08:38.694: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 14:08:38.699: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 14:08:40.699: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 14:08:40.704: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 14:08:42.699: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 14:08:42.704: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 14:08:44.699: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 14:08:44.704: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 14:08:46.699: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 14:08:46.704: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 14:08:48.699: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 14:08:48.704: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 14:08:50.699: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 14:08:50.704: INFO: Pod pod-with-poststart-http-hook still exists Apr 28 14:08:52.699: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 28 14:08:52.703: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:08:52.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1920" for this suite. 
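The pod under test wires a postStart HTTP hook at the container it created earlier to handle the HTTPGet request. A rough sketch of such a manifest follows; the handler host, port, and path are assumptions (the test points them at its separate handler pod):

```yaml
# Hypothetical pod with a postStart HTTP hook; the kubelet performs the
# HTTP GET against host:port right after the container starts.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1    # assumed image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart  # assumed handler path
          host: 10.244.1.5           # assumed handler pod IP
          port: 8080                 # assumed handler port
```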
Apr 28 14:09:14.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:09:14.836: INFO: namespace container-lifecycle-hook-1920 deletion completed in 22.128605673s • [SLOW TEST:44.298 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:09:14.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8161.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8161.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 14:09:20.936: INFO: DNS probes using dns-8161/dns-test-703360e4-9cfd-42cf-91c5-bc4496e2fe8d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:09:20.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8161" for this suite. 
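The probe scripts above derive a pod's DNS A record by dash-joining its IP with `awk` and appending `<namespace>.pod.cluster.local`. A minimal sketch of that naming rule (the helper name is mine, not from the test):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the in-cluster A record for a pod IP, mirroring the awk
    pipeline in the probe script: dots in the IP become dashes, then
    "<namespace>.pod.<cluster domain>" is appended."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# e.g. a pod at 10.244.1.7 in namespace dns-8161:
print(pod_a_record("10.244.1.7", "dns-8161"))
# → 10-244-1-7.dns-8161.pod.cluster.local
```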
Apr 28 14:09:27.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:09:27.095: INFO: namespace dns-8161 deletion completed in 6.1145007s • [SLOW TEST:12.259 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:09:27.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Apr 28 14:09:27.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-329' Apr 28 14:09:27.383: INFO: stderr: "" Apr 28 14:09:27.383: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 28 14:09:27.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-329' Apr 28 14:09:27.500: INFO: stderr: "" Apr 28 14:09:27.500: INFO: stdout: "update-demo-nautilus-58lvg update-demo-nautilus-k7p5w " Apr 28 14:09:27.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58lvg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-329' Apr 28 14:09:27.593: INFO: stderr: "" Apr 28 14:09:27.593: INFO: stdout: "" Apr 28 14:09:27.593: INFO: update-demo-nautilus-58lvg is created but not running Apr 28 14:09:32.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-329' Apr 28 14:09:32.695: INFO: stderr: "" Apr 28 14:09:32.695: INFO: stdout: "update-demo-nautilus-58lvg update-demo-nautilus-k7p5w " Apr 28 14:09:32.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58lvg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-329' Apr 28 14:09:32.815: INFO: stderr: "" Apr 28 14:09:32.815: INFO: stdout: "true" Apr 28 14:09:32.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-58lvg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-329' Apr 28 14:09:32.919: INFO: stderr: "" Apr 28 14:09:32.919: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 14:09:32.919: INFO: validating pod update-demo-nautilus-58lvg Apr 28 14:09:32.923: INFO: got data: { "image": "nautilus.jpg" } Apr 28 14:09:32.923: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 14:09:32.923: INFO: update-demo-nautilus-58lvg is verified up and running Apr 28 14:09:32.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7p5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-329' Apr 28 14:09:33.015: INFO: stderr: "" Apr 28 14:09:33.015: INFO: stdout: "true" Apr 28 14:09:33.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7p5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-329' Apr 28 14:09:33.096: INFO: stderr: "" Apr 28 14:09:33.096: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 14:09:33.096: INFO: validating pod update-demo-nautilus-k7p5w Apr 28 14:09:33.100: INFO: got data: { "image": "nautilus.jpg" } Apr 28 14:09:33.100: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 28 14:09:33.100: INFO: update-demo-nautilus-k7p5w is verified up and running STEP: rolling-update to new replication controller Apr 28 14:09:33.102: INFO: scanned /root for discovery docs: Apr 28 14:09:33.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-329' Apr 28 14:09:55.661: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 28 14:09:55.661: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 28 14:09:55.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-329' Apr 28 14:09:55.769: INFO: stderr: "" Apr 28 14:09:55.769: INFO: stdout: "update-demo-kitten-gg5sg update-demo-kitten-jmlx7 " Apr 28 14:09:55.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gg5sg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-329' Apr 28 14:09:55.867: INFO: stderr: "" Apr 28 14:09:55.867: INFO: stdout: "true" Apr 28 14:09:55.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gg5sg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-329' Apr 28 14:09:55.961: INFO: stderr: "" Apr 28 14:09:55.961: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 28 14:09:55.961: INFO: validating pod update-demo-kitten-gg5sg Apr 28 14:09:55.965: INFO: got data: { "image": "kitten.jpg" } Apr 28 14:09:55.965: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 28 14:09:55.965: INFO: update-demo-kitten-gg5sg is verified up and running Apr 28 14:09:55.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jmlx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-329' Apr 28 14:09:56.054: INFO: stderr: "" Apr 28 14:09:56.054: INFO: stdout: "true" Apr 28 14:09:56.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jmlx7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-329' Apr 28 14:09:56.140: INFO: stderr: "" Apr 28 14:09:56.140: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 28 14:09:56.140: INFO: validating pod update-demo-kitten-jmlx7 Apr 28 14:09:56.144: INFO: got data: { "image": "kitten.jpg" } Apr 28 14:09:56.144: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 28 14:09:56.144: INFO: update-demo-kitten-jmlx7 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:09:56.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-329" for this suite. 
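As the stderr above notes, `kubectl rolling-update` is deprecated in favor of `kubectl rollout`. For a Deployment, the equivalent image swap would look roughly like this (illustrative names, requires a live cluster):

```shell
# Deprecated RC-based form, as run by the test:
#   kubectl rolling-update update-demo-nautilus --update-period=1s -f -
# Deployment-based replacement (illustrative):
kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo
```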
Apr 28 14:10:18.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:10:18.250: INFO: namespace kubectl-329 deletion completed in 22.10228332s • [SLOW TEST:51.154 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:10:18.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-5ef895cc-26c5-4bb2-92db-2fc83789f78e STEP: Creating a pod to test consume secrets Apr 28 14:10:18.389: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e6982de3-4164-4613-be24-7c1cb163577b" in namespace "projected-2637" to be "success or failure" Apr 28 14:10:18.402: INFO: Pod "pod-projected-secrets-e6982de3-4164-4613-be24-7c1cb163577b": Phase="Pending", 
Reason="", readiness=false. Elapsed: 12.447121ms Apr 28 14:10:20.406: INFO: Pod "pod-projected-secrets-e6982de3-4164-4613-be24-7c1cb163577b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01667958s Apr 28 14:10:22.410: INFO: Pod "pod-projected-secrets-e6982de3-4164-4613-be24-7c1cb163577b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021154629s STEP: Saw pod success Apr 28 14:10:22.410: INFO: Pod "pod-projected-secrets-e6982de3-4164-4613-be24-7c1cb163577b" satisfied condition "success or failure" Apr 28 14:10:22.413: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-e6982de3-4164-4613-be24-7c1cb163577b container projected-secret-volume-test: STEP: delete the pod Apr 28 14:10:22.437: INFO: Waiting for pod pod-projected-secrets-e6982de3-4164-4613-be24-7c1cb163577b to disappear Apr 28 14:10:22.441: INFO: Pod pod-projected-secrets-e6982de3-4164-4613-be24-7c1cb163577b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:10:22.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2637" for this suite. 
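The projected-secret volume with an explicit defaultMode checked by this [LinuxOnly] test can be sketched as follows; the names and image are illustrative:

```yaml
# Hypothetical pod mounting a projected secret volume; defaultMode sets
# the file permission bits the test's mount container verifies.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400            # files readable by owner only
      sources:
      - secret:
          name: projected-secret-test  # illustrative secret name
```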
Apr 28 14:10:28.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:10:28.536: INFO: namespace projected-2637 deletion completed in 6.091484395s • [SLOW TEST:10.286 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:10:28.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 28 14:10:28.623: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 28 14:10:28.639: INFO: Waiting for terminating namespaces to be deleted... 
Apr 28 14:10:28.642: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 28 14:10:28.646: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 28 14:10:28.646: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 14:10:28.646: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 28 14:10:28.646: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 14:10:28.646: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 28 14:10:28.650: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 28 14:10:28.650: INFO: Container kube-proxy ready: true, restart count 0 Apr 28 14:10:28.650: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 28 14:10:28.650: INFO: Container kindnet-cni ready: true, restart count 0 Apr 28 14:10:28.650: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 28 14:10:28.650: INFO: Container coredns ready: true, restart count 0 Apr 28 14:10:28.650: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 28 14:10:28.650: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160a0119bd735c81], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
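A pod whose nodeSelector matches no node, as in the FailedScheduling event above ("0/3 nodes are available"), looks roughly like this; the label key/value are illustrative:

```yaml
# Hypothetical unschedulable pod: no node carries this label, so the
# scheduler leaves it Pending and records a FailedScheduling event.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    env: nonexistent-value         # assumed label matched by no node
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1    # assumed image
```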
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:10:29.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9487" for this suite. Apr 28 14:10:35.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:10:35.767: INFO: namespace sched-pred-9487 deletion completed in 6.09248167s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.230 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:10:35.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2f024857-b805-463b-a8e3-2f854aa397fc STEP: Creating a pod to test consume secrets Apr 28 14:10:35.869: INFO: Waiting up to 5m0s 
for pod "pod-secrets-fbec7ec5-ba78-47e1-b2d6-8aadc597749d" in namespace "secrets-2277" to be "success or failure" Apr 28 14:10:35.872: INFO: Pod "pod-secrets-fbec7ec5-ba78-47e1-b2d6-8aadc597749d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.584093ms Apr 28 14:10:37.876: INFO: Pod "pod-secrets-fbec7ec5-ba78-47e1-b2d6-8aadc597749d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006271019s Apr 28 14:10:39.880: INFO: Pod "pod-secrets-fbec7ec5-ba78-47e1-b2d6-8aadc597749d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010373931s STEP: Saw pod success Apr 28 14:10:39.880: INFO: Pod "pod-secrets-fbec7ec5-ba78-47e1-b2d6-8aadc597749d" satisfied condition "success or failure" Apr 28 14:10:39.882: INFO: Trying to get logs from node iruya-worker pod pod-secrets-fbec7ec5-ba78-47e1-b2d6-8aadc597749d container secret-volume-test: STEP: delete the pod Apr 28 14:10:39.900: INFO: Waiting for pod pod-secrets-fbec7ec5-ba78-47e1-b2d6-8aadc597749d to disappear Apr 28 14:10:39.910: INFO: Pod pod-secrets-fbec7ec5-ba78-47e1-b2d6-8aadc597749d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:10:39.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2277" for this suite. 
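Consuming one secret through multiple volumes in a single pod, as this test does, can be sketched like so (names and image are illustrative):

```yaml
# Hypothetical pod mounting the same secret at two paths; the test's
# mount container reads both to confirm each volume is populated.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test      # illustrative secret name
  - name: secret-volume-2
    secret:
      secretName: secret-test      # same secret, second mount
```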
Apr 28 14:10:45.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:10:46.018: INFO: namespace secrets-2277 deletion completed in 6.105033457s

• [SLOW TEST:10.251 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:10:46.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5834/secret-test-5edd6b8e-d7ba-49f3-93a3-e3b181dfab34
STEP: Creating a pod to test consume secrets
Apr 28 14:10:46.087: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ae3af4d-010b-4978-8350-86b9529af1d9" in namespace "secrets-5834" to be "success or failure"
Apr 28 14:10:46.090: INFO: Pod "pod-configmaps-0ae3af4d-010b-4978-8350-86b9529af1d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.367969ms
Apr 28 14:10:48.094: INFO: Pod "pod-configmaps-0ae3af4d-010b-4978-8350-86b9529af1d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007765863s
Apr 28 14:10:50.099: INFO: Pod "pod-configmaps-0ae3af4d-010b-4978-8350-86b9529af1d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012319312s
STEP: Saw pod success
Apr 28 14:10:50.099: INFO: Pod "pod-configmaps-0ae3af4d-010b-4978-8350-86b9529af1d9" satisfied condition "success or failure"
Apr 28 14:10:50.102: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0ae3af4d-010b-4978-8350-86b9529af1d9 container env-test:
STEP: delete the pod
Apr 28 14:10:50.134: INFO: Waiting for pod pod-configmaps-0ae3af4d-010b-4978-8350-86b9529af1d9 to disappear
Apr 28 14:10:50.144: INFO: Pod pod-configmaps-0ae3af4d-010b-4978-8350-86b9529af1d9 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:10:50.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5834" for this suite.
Apr 28 14:10:56.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:10:56.259: INFO: namespace secrets-5834 deletion completed in 6.108580078s

• [SLOW TEST:10.240 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:10:56.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-1be30d23-985c-4d6a-8527-50c7167c64d2
STEP: Creating configMap with name cm-test-opt-upd-23834a3a-0d13-49ef-8f17-92fb474309d5
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1be30d23-985c-4d6a-8527-50c7167c64d2
STEP: Updating configmap cm-test-opt-upd-23834a3a-0d13-49ef-8f17-92fb474309d5
STEP: Creating configMap with name cm-test-opt-create-8d2a301e-fa5e-44f5-9e70-d5a9d3971f34
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:12:16.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5606" for this suite.
Apr 28 14:12:38.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:12:39.062: INFO: namespace configmap-5606 deletion completed in 22.100635525s

• [SLOW TEST:102.803 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:12:39.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3563.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3563.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3563.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3563.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3563.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3563.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 28 14:12:45.206: INFO: DNS probes using dns-3563/dns-test-dcaffc8a-35bc-48ec-9266-2198df191a5b succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:12:45.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3563" for this suite.
Apr 28 14:12:51.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:12:51.364: INFO: namespace dns-3563 deletion completed in 6.102632658s

• [SLOW TEST:12.302 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:12:51.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-c25134d3-6d44-42df-a0f9-208450f235aa
STEP: Creating a pod to test consume secrets
Apr 28 14:12:51.451: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66a4330d-ff20-48ae-b921-fff05aaf1409" in namespace "projected-2280" to be "success or failure"
Apr 28 14:12:51.470: INFO: Pod "pod-projected-secrets-66a4330d-ff20-48ae-b921-fff05aaf1409": Phase="Pending", Reason="", readiness=false. Elapsed: 18.804217ms
Apr 28 14:12:53.590: INFO: Pod "pod-projected-secrets-66a4330d-ff20-48ae-b921-fff05aaf1409": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138922602s
Apr 28 14:12:55.626: INFO: Pod "pod-projected-secrets-66a4330d-ff20-48ae-b921-fff05aaf1409": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174706703s
STEP: Saw pod success
Apr 28 14:12:55.626: INFO: Pod "pod-projected-secrets-66a4330d-ff20-48ae-b921-fff05aaf1409" satisfied condition "success or failure"
Apr 28 14:12:55.629: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-66a4330d-ff20-48ae-b921-fff05aaf1409 container projected-secret-volume-test:
STEP: delete the pod
Apr 28 14:12:55.657: INFO: Waiting for pod pod-projected-secrets-66a4330d-ff20-48ae-b921-fff05aaf1409 to disappear
Apr 28 14:12:55.678: INFO: Pod pod-projected-secrets-66a4330d-ff20-48ae-b921-fff05aaf1409 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:12:55.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2280" for this suite.
Apr 28 14:13:01.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:13:01.776: INFO: namespace projected-2280 deletion completed in 6.094281371s

• [SLOW TEST:10.411 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:13:01.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 28 14:13:01.859: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ca1eafb-439c-42c4-8d1b-300fdb87e01c" in namespace "downward-api-5277" to be "success or failure"
Apr 28 14:13:01.862: INFO: Pod "downwardapi-volume-6ca1eafb-439c-42c4-8d1b-300fdb87e01c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.731469ms
Apr 28 14:13:03.866: INFO: Pod "downwardapi-volume-6ca1eafb-439c-42c4-8d1b-300fdb87e01c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006543482s
Apr 28 14:13:05.870: INFO: Pod "downwardapi-volume-6ca1eafb-439c-42c4-8d1b-300fdb87e01c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01082784s
STEP: Saw pod success
Apr 28 14:13:05.870: INFO: Pod "downwardapi-volume-6ca1eafb-439c-42c4-8d1b-300fdb87e01c" satisfied condition "success or failure"
Apr 28 14:13:05.873: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6ca1eafb-439c-42c4-8d1b-300fdb87e01c container client-container:
STEP: delete the pod
Apr 28 14:13:05.890: INFO: Waiting for pod downwardapi-volume-6ca1eafb-439c-42c4-8d1b-300fdb87e01c to disappear
Apr 28 14:13:05.894: INFO: Pod downwardapi-volume-6ca1eafb-439c-42c4-8d1b-300fdb87e01c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:13:05.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5277" for this suite.
Apr 28 14:13:11.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:13:11.992: INFO: namespace downward-api-5277 deletion completed in 6.095370571s

• [SLOW TEST:10.216 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:13:11.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:13:38.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8020" for this suite.
Apr 28 14:13:44.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:13:44.299: INFO: namespace namespaces-8020 deletion completed in 6.095027003s
STEP: Destroying namespace "nsdeletetest-383" for this suite.
Apr 28 14:13:44.301: INFO: Namespace nsdeletetest-383 was already deleted
STEP: Destroying namespace "nsdeletetest-3755" for this suite.
Apr 28 14:13:50.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:13:50.394: INFO: namespace nsdeletetest-3755 deletion completed in 6.092271579s

• [SLOW TEST:38.402 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:13:50.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6224ba44-1a25-4db1-bc51-1c29d9f8dc43
STEP: Creating a pod to test consume secrets
Apr 28 14:13:50.579: INFO: Waiting up to 5m0s for pod "pod-secrets-b1d2e4e1-b212-4c52-b56b-3373b98a75b6" in namespace "secrets-5371" to be "success or failure"
Apr 28 14:13:50.583: INFO: Pod "pod-secrets-b1d2e4e1-b212-4c52-b56b-3373b98a75b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441353ms
Apr 28 14:13:52.588: INFO: Pod "pod-secrets-b1d2e4e1-b212-4c52-b56b-3373b98a75b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008752109s
Apr 28 14:13:54.592: INFO: Pod "pod-secrets-b1d2e4e1-b212-4c52-b56b-3373b98a75b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013182076s
STEP: Saw pod success
Apr 28 14:13:54.592: INFO: Pod "pod-secrets-b1d2e4e1-b212-4c52-b56b-3373b98a75b6" satisfied condition "success or failure"
Apr 28 14:13:54.595: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b1d2e4e1-b212-4c52-b56b-3373b98a75b6 container secret-volume-test:
STEP: delete the pod
Apr 28 14:13:54.670: INFO: Waiting for pod pod-secrets-b1d2e4e1-b212-4c52-b56b-3373b98a75b6 to disappear
Apr 28 14:13:54.679: INFO: Pod pod-secrets-b1d2e4e1-b212-4c52-b56b-3373b98a75b6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:13:54.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5371" for this suite.
Apr 28 14:14:00.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:14:00.818: INFO: namespace secrets-5371 deletion completed in 6.135448607s

• [SLOW TEST:10.424 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:14:00.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:14:04.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5991" for this suite.
Apr 28 14:14:54.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:14:55.021: INFO: namespace kubelet-test-5991 deletion completed in 50.095918523s

• [SLOW TEST:54.203 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:14:55.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0428 14:15:06.191686       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 28 14:15:06.191: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:15:06.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4889" for this suite.
Apr 28 14:15:14.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:15:14.288: INFO: namespace gc-4889 deletion completed in 8.094284905s

• [SLOW TEST:19.267 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:15:14.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 28 14:15:14.790: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be4e3f5f-2f7b-4979-abbe-5dc795202de5" in namespace "projected-7172" to be "success or failure"
Apr 28 14:15:14.897: INFO: Pod "downwardapi-volume-be4e3f5f-2f7b-4979-abbe-5dc795202de5": Phase="Pending", Reason="", readiness=false. Elapsed: 107.725645ms
Apr 28 14:15:16.902: INFO: Pod "downwardapi-volume-be4e3f5f-2f7b-4979-abbe-5dc795202de5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112226285s
Apr 28 14:15:18.906: INFO: Pod "downwardapi-volume-be4e3f5f-2f7b-4979-abbe-5dc795202de5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11673244s
STEP: Saw pod success
Apr 28 14:15:18.906: INFO: Pod "downwardapi-volume-be4e3f5f-2f7b-4979-abbe-5dc795202de5" satisfied condition "success or failure"
Apr 28 14:15:18.910: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-be4e3f5f-2f7b-4979-abbe-5dc795202de5 container client-container:
STEP: delete the pod
Apr 28 14:15:18.946: INFO: Waiting for pod downwardapi-volume-be4e3f5f-2f7b-4979-abbe-5dc795202de5 to disappear
Apr 28 14:15:19.005: INFO: Pod downwardapi-volume-be4e3f5f-2f7b-4979-abbe-5dc795202de5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:15:19.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7172" for this suite.
Apr 28 14:15:25.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:15:25.120: INFO: namespace projected-7172 deletion completed in 6.111252753s

• [SLOW TEST:10.831 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:15:25.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 28 14:15:25.222: INFO: Waiting up to 5m0s for pod "downward-api-5ad9aabb-4a20-44bd-9043-8a03660ba9ce" in namespace "downward-api-5901" to be "success or failure"
Apr 28 14:15:25.239: INFO: Pod "downward-api-5ad9aabb-4a20-44bd-9043-8a03660ba9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 17.616196ms
Apr 28 14:15:27.269: INFO: Pod "downward-api-5ad9aabb-4a20-44bd-9043-8a03660ba9ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046984081s
Apr 28 14:15:29.273: INFO: Pod "downward-api-5ad9aabb-4a20-44bd-9043-8a03660ba9ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051449476s
STEP: Saw pod success
Apr 28 14:15:29.273: INFO: Pod "downward-api-5ad9aabb-4a20-44bd-9043-8a03660ba9ce" satisfied condition "success or failure"
Apr 28 14:15:29.276: INFO: Trying to get logs from node iruya-worker pod downward-api-5ad9aabb-4a20-44bd-9043-8a03660ba9ce container dapi-container:
STEP: delete the pod
Apr 28 14:15:29.294: INFO: Waiting for pod downward-api-5ad9aabb-4a20-44bd-9043-8a03660ba9ce to disappear
Apr 28 14:15:29.311: INFO: Pod downward-api-5ad9aabb-4a20-44bd-9043-8a03660ba9ce no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:15:29.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5901" for this suite.
Apr 28 14:15:35.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:15:35.404: INFO: namespace downward-api-5901 deletion completed in 6.089882527s

• [SLOW TEST:10.284 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:15:35.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-25d9514a-93a8-4df5-934d-2cf25f2dcbea
STEP: Creating a pod to test consume secrets
Apr 28 14:15:35.480: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ea358ac-f346-483d-8c26-5058bfb39483" in namespace "projected-5576" to be "success or failure"
Apr 28 14:15:35.498: INFO: Pod "pod-projected-secrets-6ea358ac-f346-483d-8c26-5058bfb39483": Phase="Pending", Reason="", readiness=false. Elapsed: 18.072257ms
Apr 28 14:15:37.503: INFO: Pod "pod-projected-secrets-6ea358ac-f346-483d-8c26-5058bfb39483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022671147s
Apr 28 14:15:39.507: INFO: Pod "pod-projected-secrets-6ea358ac-f346-483d-8c26-5058bfb39483": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026454772s
STEP: Saw pod success
Apr 28 14:15:39.507: INFO: Pod "pod-projected-secrets-6ea358ac-f346-483d-8c26-5058bfb39483" satisfied condition "success or failure"
Apr 28 14:15:39.509: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-6ea358ac-f346-483d-8c26-5058bfb39483 container projected-secret-volume-test:
STEP: delete the pod
Apr 28 14:15:39.528: INFO: Waiting for pod pod-projected-secrets-6ea358ac-f346-483d-8c26-5058bfb39483 to disappear
Apr 28 14:15:39.532: INFO: Pod pod-projected-secrets-6ea358ac-f346-483d-8c26-5058bfb39483 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:15:39.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5576" for this suite.
Apr 28 14:15:45.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:15:45.626: INFO: namespace projected-5576 deletion completed in 6.091409676s

• [SLOW TEST:10.222 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:15:45.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 28 14:15:45.660: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:15:52.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5491" for this suite.
Apr 28 14:16:14.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:16:14.972: INFO: namespace init-container-5491 deletion completed in 22.094549516s

• [SLOW TEST:29.346 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:16:14.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in
namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 14:16:15.121: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca781a97-bec2-4c62-85c1-7081b19705a9" in namespace "downward-api-1596" to be "success or failure" Apr 28 14:16:15.125: INFO: Pod "downwardapi-volume-ca781a97-bec2-4c62-85c1-7081b19705a9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.895829ms Apr 28 14:16:17.129: INFO: Pod "downwardapi-volume-ca781a97-bec2-4c62-85c1-7081b19705a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007731357s Apr 28 14:16:19.133: INFO: Pod "downwardapi-volume-ca781a97-bec2-4c62-85c1-7081b19705a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011604748s STEP: Saw pod success Apr 28 14:16:19.133: INFO: Pod "downwardapi-volume-ca781a97-bec2-4c62-85c1-7081b19705a9" satisfied condition "success or failure" Apr 28 14:16:19.136: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ca781a97-bec2-4c62-85c1-7081b19705a9 container client-container: STEP: delete the pod Apr 28 14:16:19.168: INFO: Waiting for pod downwardapi-volume-ca781a97-bec2-4c62-85c1-7081b19705a9 to disappear Apr 28 14:16:19.180: INFO: Pod downwardapi-volume-ca781a97-bec2-4c62-85c1-7081b19705a9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:16:19.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1596" for this suite. 
Apr 28 14:16:25.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:16:25.345: INFO: namespace downward-api-1596 deletion completed in 6.161928306s • [SLOW TEST:10.373 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:16:25.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 28 14:16:25.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2880' Apr 28 14:16:25.647: INFO: stderr: "" Apr 28 14:16:25.647: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 28 14:16:25.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2880' Apr 28 14:16:25.807: INFO: stderr: "" Apr 28 14:16:25.807: INFO: stdout: "update-demo-nautilus-ph65g update-demo-nautilus-rtp97 " Apr 28 14:16:25.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ph65g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2880' Apr 28 14:16:25.924: INFO: stderr: "" Apr 28 14:16:25.924: INFO: stdout: "" Apr 28 14:16:25.924: INFO: update-demo-nautilus-ph65g is created but not running Apr 28 14:16:30.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2880' Apr 28 14:16:31.026: INFO: stderr: "" Apr 28 14:16:31.026: INFO: stdout: "update-demo-nautilus-ph65g update-demo-nautilus-rtp97 " Apr 28 14:16:31.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ph65g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2880' Apr 28 14:16:31.115: INFO: stderr: "" Apr 28 14:16:31.115: INFO: stdout: "true" Apr 28 14:16:31.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ph65g -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2880' Apr 28 14:16:31.207: INFO: stderr: "" Apr 28 14:16:31.207: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 14:16:31.207: INFO: validating pod update-demo-nautilus-ph65g Apr 28 14:16:31.211: INFO: got data: { "image": "nautilus.jpg" } Apr 28 14:16:31.211: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 28 14:16:31.211: INFO: update-demo-nautilus-ph65g is verified up and running Apr 28 14:16:31.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rtp97 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2880' Apr 28 14:16:31.303: INFO: stderr: "" Apr 28 14:16:31.303: INFO: stdout: "true" Apr 28 14:16:31.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rtp97 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2880' Apr 28 14:16:31.400: INFO: stderr: "" Apr 28 14:16:31.400: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 28 14:16:31.400: INFO: validating pod update-demo-nautilus-rtp97 Apr 28 14:16:31.404: INFO: got data: { "image": "nautilus.jpg" } Apr 28 14:16:31.404: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 28 14:16:31.404: INFO: update-demo-nautilus-rtp97 is verified up and running STEP: using delete to clean up resources Apr 28 14:16:31.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2880' Apr 28 14:16:31.514: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 28 14:16:31.514: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 28 14:16:31.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2880' Apr 28 14:16:31.616: INFO: stderr: "No resources found.\n" Apr 28 14:16:31.616: INFO: stdout: "" Apr 28 14:16:31.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2880 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 14:16:31.716: INFO: stderr: "" Apr 28 14:16:31.716: INFO: stdout: "update-demo-nautilus-ph65g\nupdate-demo-nautilus-rtp97\n" Apr 28 14:16:32.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2880' Apr 28 14:16:32.314: INFO: stderr: "No resources found.\n" Apr 28 14:16:32.314: INFO: stdout: "" Apr 28 14:16:32.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2880 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 14:16:32.416: INFO: stderr: "" Apr 28 14:16:32.416: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:16:32.416: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2880" for this suite. Apr 28 14:16:54.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:16:54.625: INFO: namespace kubectl-2880 deletion completed in 22.205670508s • [SLOW TEST:29.280 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:16:54.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 14:16:54.722: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2b4eabe-8c06-49a3-a1f1-ac97f717b894" in namespace "projected-4918" to be "success or failure" Apr 28 14:16:54.725: INFO: Pod 
"downwardapi-volume-a2b4eabe-8c06-49a3-a1f1-ac97f717b894": Phase="Pending", Reason="", readiness=false. Elapsed: 3.494866ms Apr 28 14:16:56.756: INFO: Pod "downwardapi-volume-a2b4eabe-8c06-49a3-a1f1-ac97f717b894": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034154598s Apr 28 14:16:58.760: INFO: Pod "downwardapi-volume-a2b4eabe-8c06-49a3-a1f1-ac97f717b894": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038187602s STEP: Saw pod success Apr 28 14:16:58.760: INFO: Pod "downwardapi-volume-a2b4eabe-8c06-49a3-a1f1-ac97f717b894" satisfied condition "success or failure" Apr 28 14:16:58.763: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a2b4eabe-8c06-49a3-a1f1-ac97f717b894 container client-container: STEP: delete the pod Apr 28 14:16:58.828: INFO: Waiting for pod downwardapi-volume-a2b4eabe-8c06-49a3-a1f1-ac97f717b894 to disappear Apr 28 14:16:58.846: INFO: Pod downwardapi-volume-a2b4eabe-8c06-49a3-a1f1-ac97f717b894 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:16:58.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4918" for this suite. 
Apr 28 14:17:04.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:17:04.924: INFO: namespace projected-4918 deletion completed in 6.074929776s • [SLOW TEST:10.298 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:17:04.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 28 14:17:09.517: INFO: Successfully updated pod "labelsupdate76f0684f-7f3e-40a7-a4b6-7eb409d5feca" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:17:13.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6668" for this suite. 
Apr 28 14:17:35.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:17:35.656: INFO: namespace downward-api-6668 deletion completed in 22.109531834s • [SLOW TEST:30.732 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:17:35.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 14:17:35.715: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:17:39.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8615" for this suite. 
Apr 28 14:18:25.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:18:26.007: INFO: namespace pods-8615 deletion completed in 46.094767277s • [SLOW TEST:50.351 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:18:26.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 14:18:26.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41426509-d728-4fbe-899d-49e61ec7399d" in namespace "projected-3761" to be "success or failure" Apr 28 14:18:26.107: INFO: Pod "downwardapi-volume-41426509-d728-4fbe-899d-49e61ec7399d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.301945ms Apr 28 14:18:28.116: INFO: Pod "downwardapi-volume-41426509-d728-4fbe-899d-49e61ec7399d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026489561s Apr 28 14:18:30.120: INFO: Pod "downwardapi-volume-41426509-d728-4fbe-899d-49e61ec7399d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030545306s STEP: Saw pod success Apr 28 14:18:30.120: INFO: Pod "downwardapi-volume-41426509-d728-4fbe-899d-49e61ec7399d" satisfied condition "success or failure" Apr 28 14:18:30.123: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-41426509-d728-4fbe-899d-49e61ec7399d container client-container: STEP: delete the pod Apr 28 14:18:30.141: INFO: Waiting for pod downwardapi-volume-41426509-d728-4fbe-899d-49e61ec7399d to disappear Apr 28 14:18:30.145: INFO: Pod downwardapi-volume-41426509-d728-4fbe-899d-49e61ec7399d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:18:30.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3761" for this suite. 
Apr 28 14:18:36.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:18:36.244: INFO: namespace projected-3761 deletion completed in 6.094775658s • [SLOW TEST:10.237 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:18:36.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-32cc2e60-9036-4ba3-83ba-bdeb3900b094 STEP: Creating a pod to test consume secrets Apr 28 14:18:36.334: INFO: Waiting up to 5m0s for pod "pod-secrets-6f5258e9-b24d-44e8-aced-2ff183766a62" in namespace "secrets-1591" to be "success or failure" Apr 28 14:18:36.343: INFO: Pod "pod-secrets-6f5258e9-b24d-44e8-aced-2ff183766a62": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.334545ms Apr 28 14:18:38.347: INFO: Pod "pod-secrets-6f5258e9-b24d-44e8-aced-2ff183766a62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012992526s Apr 28 14:18:40.351: INFO: Pod "pod-secrets-6f5258e9-b24d-44e8-aced-2ff183766a62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01680401s STEP: Saw pod success Apr 28 14:18:40.351: INFO: Pod "pod-secrets-6f5258e9-b24d-44e8-aced-2ff183766a62" satisfied condition "success or failure" Apr 28 14:18:40.353: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6f5258e9-b24d-44e8-aced-2ff183766a62 container secret-volume-test: STEP: delete the pod Apr 28 14:18:40.409: INFO: Waiting for pod pod-secrets-6f5258e9-b24d-44e8-aced-2ff183766a62 to disappear Apr 28 14:18:40.412: INFO: Pod pod-secrets-6f5258e9-b24d-44e8-aced-2ff183766a62 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:18:40.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1591" for this suite. 
Apr 28 14:18:46.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:18:46.505: INFO: namespace secrets-1591 deletion completed in 6.089255798s • [SLOW TEST:10.260 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:18:46.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 14:18:46.595: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 4.909029ms)
Apr 28 14:18:46.599: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.042281ms)
Apr 28 14:18:46.602: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.584775ms)
Apr 28 14:18:46.606: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.339872ms)
Apr 28 14:18:46.608: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.933853ms)
Apr 28 14:18:46.612: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.009319ms)
Apr 28 14:18:46.615: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.486596ms)
Apr 28 14:18:46.618: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.176045ms)
Apr 28 14:18:46.622: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.528626ms)
Apr 28 14:18:46.626: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.875498ms)
Apr 28 14:18:46.629: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.208691ms)
Apr 28 14:18:46.632: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.889972ms)
Apr 28 14:18:46.635: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.119871ms)
Apr 28 14:18:46.638: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.856902ms)
Apr 28 14:18:46.641: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.145641ms)
Apr 28 14:18:46.644: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.976673ms)
Apr 28 14:18:46.647: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.149878ms)
Apr 28 14:18:46.651: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.207177ms)
Apr 28 14:18:46.654: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.482306ms)
Apr 28 14:18:46.657: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.280451ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:18:46.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3558" for this suite. Apr 28 14:18:52.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:18:52.800: INFO: namespace proxy-3558 deletion completed in 6.139007288s • [SLOW TEST:6.294 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:18:52.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 28 14:18:52.878: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-ee7faf25-6744-4286-9672-21396378e1ee" in namespace "projected-1263" to be "success or failure" Apr 28 14:18:52.884: INFO: Pod "downwardapi-volume-ee7faf25-6744-4286-9672-21396378e1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 5.540075ms Apr 28 14:18:54.912: INFO: Pod "downwardapi-volume-ee7faf25-6744-4286-9672-21396378e1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033678706s Apr 28 14:18:56.916: INFO: Pod "downwardapi-volume-ee7faf25-6744-4286-9672-21396378e1ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037854741s STEP: Saw pod success Apr 28 14:18:56.916: INFO: Pod "downwardapi-volume-ee7faf25-6744-4286-9672-21396378e1ee" satisfied condition "success or failure" Apr 28 14:18:56.919: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ee7faf25-6744-4286-9672-21396378e1ee container client-container: STEP: delete the pod Apr 28 14:18:56.939: INFO: Waiting for pod downwardapi-volume-ee7faf25-6744-4286-9672-21396378e1ee to disappear Apr 28 14:18:56.944: INFO: Pod downwardapi-volume-ee7faf25-6744-4286-9672-21396378e1ee no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:18:56.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1263" for this suite. 
Apr 28 14:19:02.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:19:03.032: INFO: namespace projected-1263 deletion completed in 6.084097271s
• [SLOW TEST:10.232 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:19:03.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 28 14:19:03.120: INFO: Waiting up to 5m0s for pod "downward-api-21a71f14-96e0-400f-a083-937e8ee0c5d0" in namespace "downward-api-8631" to be "success or failure"
Apr 28 14:19:03.130: INFO: Pod "downward-api-21a71f14-96e0-400f-a083-937e8ee0c5d0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.799316ms
Apr 28 14:19:05.134: INFO: Pod "downward-api-21a71f14-96e0-400f-a083-937e8ee0c5d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013596674s
Apr 28 14:19:07.138: INFO: Pod "downward-api-21a71f14-96e0-400f-a083-937e8ee0c5d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017606485s
STEP: Saw pod success
Apr 28 14:19:07.138: INFO: Pod "downward-api-21a71f14-96e0-400f-a083-937e8ee0c5d0" satisfied condition "success or failure"
Apr 28 14:19:07.140: INFO: Trying to get logs from node iruya-worker pod downward-api-21a71f14-96e0-400f-a083-937e8ee0c5d0 container dapi-container:
STEP: delete the pod
Apr 28 14:19:07.219: INFO: Waiting for pod downward-api-21a71f14-96e0-400f-a083-937e8ee0c5d0 to disappear
Apr 28 14:19:07.224: INFO: Pod downward-api-21a71f14-96e0-400f-a083-937e8ee0c5d0 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:19:07.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8631" for this suite.
Apr 28 14:19:13.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:19:13.335: INFO: namespace downward-api-8631 deletion completed in 6.108728264s
• [SLOW TEST:10.302 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:19:13.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-2d43acf2-bea7-436d-bab2-567be0431795
Apr 28 14:19:13.412: INFO: Pod name my-hostname-basic-2d43acf2-bea7-436d-bab2-567be0431795: Found 0 pods out of 1
Apr 28 14:19:18.416: INFO: Pod name my-hostname-basic-2d43acf2-bea7-436d-bab2-567be0431795: Found 1 pods out of 1
Apr 28 14:19:18.416: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2d43acf2-bea7-436d-bab2-567be0431795" are running
Apr 28 14:19:18.418: INFO: Pod "my-hostname-basic-2d43acf2-bea7-436d-bab2-567be0431795-x45w7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 14:19:13 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 14:19:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 14:19:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-28 14:19:13 +0000 UTC Reason: Message:}])
Apr 28 14:19:18.418: INFO: Trying to dial the pod
Apr 28 14:19:23.427: INFO: Controller my-hostname-basic-2d43acf2-bea7-436d-bab2-567be0431795: Got expected result from replica 1 [my-hostname-basic-2d43acf2-bea7-436d-bab2-567be0431795-x45w7]: "my-hostname-basic-2d43acf2-bea7-436d-bab2-567be0431795-x45w7", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:19:23.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9899" for this suite.
Apr 28 14:19:29.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:19:29.556: INFO: namespace replication-controller-9899 deletion completed in 6.126957383s
• [SLOW TEST:16.221 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:19:29.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 28 14:19:29.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dbb3e78-e034-4bd0-a8a0-8d128a8b8b59" in namespace "downward-api-7083" to be "success or failure"
Apr 28 14:19:29.664: INFO: Pod "downwardapi-volume-7dbb3e78-e034-4bd0-a8a0-8d128a8b8b59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.425048ms
Apr 28 14:19:31.763: INFO: Pod "downwardapi-volume-7dbb3e78-e034-4bd0-a8a0-8d128a8b8b59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102284068s
Apr 28 14:19:33.768: INFO: Pod "downwardapi-volume-7dbb3e78-e034-4bd0-a8a0-8d128a8b8b59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1065534s
STEP: Saw pod success
Apr 28 14:19:33.768: INFO: Pod "downwardapi-volume-7dbb3e78-e034-4bd0-a8a0-8d128a8b8b59" satisfied condition "success or failure"
Apr 28 14:19:33.771: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7dbb3e78-e034-4bd0-a8a0-8d128a8b8b59 container client-container:
STEP: delete the pod
Apr 28 14:19:33.812: INFO: Waiting for pod downwardapi-volume-7dbb3e78-e034-4bd0-a8a0-8d128a8b8b59 to disappear
Apr 28 14:19:33.837: INFO: Pod downwardapi-volume-7dbb3e78-e034-4bd0-a8a0-8d128a8b8b59 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:19:33.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7083" for this suite.
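The Phase-transition lines in this run follow a fixed shape, which makes them easy to mine from a long log. A small sketch that extracts the pod name, phase, and elapsed time from one INFO line copied verbatim from this section (the regex is editorial, not part of the e2e framework):

```python
import re

# One INFO line copied verbatim from the run above.
LINE = ('Apr 28 14:19:29.664: INFO: Pod "downwardapi-volume-7dbb3e78-e034-'
        '4bd0-a8a0-8d128a8b8b59": Phase="Pending", Reason="", '
        'readiness=false. Elapsed: 2.425048ms')

# Named groups pull out the three fields the framework reports each poll.
PHASE_RE = re.compile(
    r'Pod "(?P<pod>[^"]+)": Phase="(?P<phase>[^"]+)",'
    r'.*Elapsed: (?P<elapsed>\S+)'
)

record = PHASE_RE.search(LINE).groupdict()
print(record)
```

Run over a whole log file, lines like these reconstruct each pod's Pending-to-Succeeded timeline without any cluster access.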
Apr 28 14:19:39.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:19:39.950: INFO: namespace downward-api-7083 deletion completed in 6.108836173s
• [SLOW TEST:10.394 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:19:39.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3452
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3452
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3452
Apr 28 14:19:40.061: INFO: Found 0 stateful pods, waiting for 1
Apr 28
14:19:50.075: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 28 14:19:50.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3452 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 14:19:52.704: INFO: stderr: "I0428 14:19:52.578371 2530 log.go:172] (0xc000116e70) (0xc00010c780) Create stream\nI0428 14:19:52.578408 2530 log.go:172] (0xc000116e70) (0xc00010c780) Stream added, broadcasting: 1\nI0428 14:19:52.581626 2530 log.go:172] (0xc000116e70) Reply frame received for 1\nI0428 14:19:52.581676 2530 log.go:172] (0xc000116e70) (0xc00095c000) Create stream\nI0428 14:19:52.581694 2530 log.go:172] (0xc000116e70) (0xc00095c000) Stream added, broadcasting: 3\nI0428 14:19:52.582815 2530 log.go:172] (0xc000116e70) Reply frame received for 3\nI0428 14:19:52.582880 2530 log.go:172] (0xc000116e70) (0xc000a66000) Create stream\nI0428 14:19:52.582904 2530 log.go:172] (0xc000116e70) (0xc000a66000) Stream added, broadcasting: 5\nI0428 14:19:52.583906 2530 log.go:172] (0xc000116e70) Reply frame received for 5\nI0428 14:19:52.657006 2530 log.go:172] (0xc000116e70) Data frame received for 5\nI0428 14:19:52.657038 2530 log.go:172] (0xc000a66000) (5) Data frame handling\nI0428 14:19:52.657058 2530 log.go:172] (0xc000a66000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0428 14:19:52.695406 2530 log.go:172] (0xc000116e70) Data frame received for 3\nI0428 14:19:52.695435 2530 log.go:172] (0xc00095c000) (3) Data frame handling\nI0428 14:19:52.695448 2530 log.go:172] (0xc00095c000) (3) Data frame sent\nI0428 14:19:52.695866 2530 log.go:172] (0xc000116e70) Data frame received for 5\nI0428 14:19:52.695894 2530 log.go:172] (0xc000a66000) (5) Data frame handling\nI0428 14:19:52.695931 2530 log.go:172] (0xc000116e70) Data frame received for 3\nI0428 
14:19:52.695979 2530 log.go:172] (0xc00095c000) (3) Data frame handling\nI0428 14:19:52.697998 2530 log.go:172] (0xc000116e70) Data frame received for 1\nI0428 14:19:52.698017 2530 log.go:172] (0xc00010c780) (1) Data frame handling\nI0428 14:19:52.698044 2530 log.go:172] (0xc00010c780) (1) Data frame sent\nI0428 14:19:52.698255 2530 log.go:172] (0xc000116e70) (0xc00010c780) Stream removed, broadcasting: 1\nI0428 14:19:52.698363 2530 log.go:172] (0xc000116e70) Go away received\nI0428 14:19:52.698741 2530 log.go:172] (0xc000116e70) (0xc00010c780) Stream removed, broadcasting: 1\nI0428 14:19:52.698766 2530 log.go:172] (0xc000116e70) (0xc00095c000) Stream removed, broadcasting: 3\nI0428 14:19:52.698778 2530 log.go:172] (0xc000116e70) (0xc000a66000) Stream removed, broadcasting: 5\n" Apr 28 14:19:52.704: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 14:19:52.704: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 14:19:52.708: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 28 14:20:02.713: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 14:20:02.713: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 14:20:02.728: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 14:20:02.728: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC }] Apr 28 14:20:02.728: INFO: Apr 28 14:20:02.728: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 
28 14:20:03.733: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996339427s Apr 28 14:20:04.777: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991164359s Apr 28 14:20:05.782: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.946942573s Apr 28 14:20:06.787: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.942518617s Apr 28 14:20:07.792: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.936978231s Apr 28 14:20:08.797: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.9324958s Apr 28 14:20:09.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.927000685s Apr 28 14:20:10.807: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.922011738s Apr 28 14:20:11.830: INFO: Verifying statefulset ss doesn't scale past 3 for another 916.749336ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3452 Apr 28 14:20:12.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3452 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:20:13.067: INFO: stderr: "I0428 14:20:12.962446 2561 log.go:172] (0xc000436420) (0xc0003e66e0) Create stream\nI0428 14:20:12.962598 2561 log.go:172] (0xc000436420) (0xc0003e66e0) Stream added, broadcasting: 1\nI0428 14:20:12.965717 2561 log.go:172] (0xc000436420) Reply frame received for 1\nI0428 14:20:12.965767 2561 log.go:172] (0xc000436420) (0xc0003e6780) Create stream\nI0428 14:20:12.965780 2561 log.go:172] (0xc000436420) (0xc0003e6780) Stream added, broadcasting: 3\nI0428 14:20:12.966904 2561 log.go:172] (0xc000436420) Reply frame received for 3\nI0428 14:20:12.966943 2561 log.go:172] (0xc000436420) (0xc0002a4500) Create stream\nI0428 14:20:12.966956 2561 log.go:172] (0xc000436420) (0xc0002a4500) Stream added, broadcasting: 5\nI0428 14:20:12.968155 2561 log.go:172] 
(0xc000436420) Reply frame received for 5\nI0428 14:20:13.061321 2561 log.go:172] (0xc000436420) Data frame received for 3\nI0428 14:20:13.061362 2561 log.go:172] (0xc0003e6780) (3) Data frame handling\nI0428 14:20:13.061377 2561 log.go:172] (0xc0003e6780) (3) Data frame sent\nI0428 14:20:13.061387 2561 log.go:172] (0xc000436420) Data frame received for 3\nI0428 14:20:13.061394 2561 log.go:172] (0xc0003e6780) (3) Data frame handling\nI0428 14:20:13.061443 2561 log.go:172] (0xc000436420) Data frame received for 5\nI0428 14:20:13.061494 2561 log.go:172] (0xc0002a4500) (5) Data frame handling\nI0428 14:20:13.061522 2561 log.go:172] (0xc0002a4500) (5) Data frame sent\nI0428 14:20:13.061543 2561 log.go:172] (0xc000436420) Data frame received for 5\nI0428 14:20:13.061558 2561 log.go:172] (0xc0002a4500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0428 14:20:13.063133 2561 log.go:172] (0xc000436420) Data frame received for 1\nI0428 14:20:13.063147 2561 log.go:172] (0xc0003e66e0) (1) Data frame handling\nI0428 14:20:13.063154 2561 log.go:172] (0xc0003e66e0) (1) Data frame sent\nI0428 14:20:13.063165 2561 log.go:172] (0xc000436420) (0xc0003e66e0) Stream removed, broadcasting: 1\nI0428 14:20:13.063200 2561 log.go:172] (0xc000436420) Go away received\nI0428 14:20:13.063422 2561 log.go:172] (0xc000436420) (0xc0003e66e0) Stream removed, broadcasting: 1\nI0428 14:20:13.063434 2561 log.go:172] (0xc000436420) (0xc0003e6780) Stream removed, broadcasting: 3\nI0428 14:20:13.063441 2561 log.go:172] (0xc000436420) (0xc0002a4500) Stream removed, broadcasting: 5\n" Apr 28 14:20:13.067: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 14:20:13.067: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 14:20:13.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3452 ss-1 -- /bin/sh -x -c mv 
-v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:20:13.263: INFO: stderr: "I0428 14:20:13.195422 2583 log.go:172] (0xc0006da9a0) (0xc0006f8820) Create stream\nI0428 14:20:13.195489 2583 log.go:172] (0xc0006da9a0) (0xc0006f8820) Stream added, broadcasting: 1\nI0428 14:20:13.198385 2583 log.go:172] (0xc0006da9a0) Reply frame received for 1\nI0428 14:20:13.198447 2583 log.go:172] (0xc0006da9a0) (0xc00095a000) Create stream\nI0428 14:20:13.198475 2583 log.go:172] (0xc0006da9a0) (0xc00095a000) Stream added, broadcasting: 3\nI0428 14:20:13.199582 2583 log.go:172] (0xc0006da9a0) Reply frame received for 3\nI0428 14:20:13.199616 2583 log.go:172] (0xc0006da9a0) (0xc00095a0a0) Create stream\nI0428 14:20:13.199628 2583 log.go:172] (0xc0006da9a0) (0xc00095a0a0) Stream added, broadcasting: 5\nI0428 14:20:13.200540 2583 log.go:172] (0xc0006da9a0) Reply frame received for 5\nI0428 14:20:13.258091 2583 log.go:172] (0xc0006da9a0) Data frame received for 3\nI0428 14:20:13.258135 2583 log.go:172] (0xc00095a000) (3) Data frame handling\nI0428 14:20:13.258150 2583 log.go:172] (0xc00095a000) (3) Data frame sent\nI0428 14:20:13.258171 2583 log.go:172] (0xc0006da9a0) Data frame received for 3\nI0428 14:20:13.258180 2583 log.go:172] (0xc00095a000) (3) Data frame handling\nI0428 14:20:13.258220 2583 log.go:172] (0xc0006da9a0) Data frame received for 5\nI0428 14:20:13.258244 2583 log.go:172] (0xc00095a0a0) (5) Data frame handling\nI0428 14:20:13.258259 2583 log.go:172] (0xc00095a0a0) (5) Data frame sent\nI0428 14:20:13.258267 2583 log.go:172] (0xc0006da9a0) Data frame received for 5\nI0428 14:20:13.258274 2583 log.go:172] (0xc00095a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0428 14:20:13.259611 2583 log.go:172] (0xc0006da9a0) Data frame received for 1\nI0428 14:20:13.259643 2583 log.go:172] (0xc0006f8820) (1) Data frame handling\nI0428 14:20:13.259660 2583 log.go:172] 
(0xc0006f8820) (1) Data frame sent\nI0428 14:20:13.259697 2583 log.go:172] (0xc0006da9a0) (0xc0006f8820) Stream removed, broadcasting: 1\nI0428 14:20:13.259824 2583 log.go:172] (0xc0006da9a0) Go away received\nI0428 14:20:13.260094 2583 log.go:172] (0xc0006da9a0) (0xc0006f8820) Stream removed, broadcasting: 1\nI0428 14:20:13.260121 2583 log.go:172] (0xc0006da9a0) (0xc00095a000) Stream removed, broadcasting: 3\nI0428 14:20:13.260137 2583 log.go:172] (0xc0006da9a0) (0xc00095a0a0) Stream removed, broadcasting: 5\n" Apr 28 14:20:13.263: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 14:20:13.263: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 14:20:13.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3452 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:20:13.470: INFO: stderr: "I0428 14:20:13.397285 2603 log.go:172] (0xc000a38370) (0xc0002c26e0) Create stream\nI0428 14:20:13.397344 2603 log.go:172] (0xc000a38370) (0xc0002c26e0) Stream added, broadcasting: 1\nI0428 14:20:13.401310 2603 log.go:172] (0xc000a38370) Reply frame received for 1\nI0428 14:20:13.401357 2603 log.go:172] (0xc000a38370) (0xc0006b0320) Create stream\nI0428 14:20:13.401369 2603 log.go:172] (0xc000a38370) (0xc0006b0320) Stream added, broadcasting: 3\nI0428 14:20:13.402261 2603 log.go:172] (0xc000a38370) Reply frame received for 3\nI0428 14:20:13.402293 2603 log.go:172] (0xc000a38370) (0xc0002c2000) Create stream\nI0428 14:20:13.402305 2603 log.go:172] (0xc000a38370) (0xc0002c2000) Stream added, broadcasting: 5\nI0428 14:20:13.403056 2603 log.go:172] (0xc000a38370) Reply frame received for 5\nI0428 14:20:13.464936 2603 log.go:172] (0xc000a38370) Data frame received for 5\nI0428 14:20:13.464983 2603 log.go:172] (0xc0002c2000) (5) Data frame handling\nI0428 14:20:13.464994 2603 
log.go:172] (0xc0002c2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0428 14:20:13.465010 2603 log.go:172] (0xc000a38370) Data frame received for 3\nI0428 14:20:13.465018 2603 log.go:172] (0xc0006b0320) (3) Data frame handling\nI0428 14:20:13.465032 2603 log.go:172] (0xc0006b0320) (3) Data frame sent\nI0428 14:20:13.465042 2603 log.go:172] (0xc000a38370) Data frame received for 3\nI0428 14:20:13.465053 2603 log.go:172] (0xc0006b0320) (3) Data frame handling\nI0428 14:20:13.465090 2603 log.go:172] (0xc000a38370) Data frame received for 5\nI0428 14:20:13.465207 2603 log.go:172] (0xc0002c2000) (5) Data frame handling\nI0428 14:20:13.466815 2603 log.go:172] (0xc000a38370) Data frame received for 1\nI0428 14:20:13.466834 2603 log.go:172] (0xc0002c26e0) (1) Data frame handling\nI0428 14:20:13.466846 2603 log.go:172] (0xc0002c26e0) (1) Data frame sent\nI0428 14:20:13.466863 2603 log.go:172] (0xc000a38370) (0xc0002c26e0) Stream removed, broadcasting: 1\nI0428 14:20:13.466883 2603 log.go:172] (0xc000a38370) Go away received\nI0428 14:20:13.467186 2603 log.go:172] (0xc000a38370) (0xc0002c26e0) Stream removed, broadcasting: 1\nI0428 14:20:13.467204 2603 log.go:172] (0xc000a38370) (0xc0006b0320) Stream removed, broadcasting: 3\nI0428 14:20:13.467212 2603 log.go:172] (0xc000a38370) (0xc0002c2000) Stream removed, broadcasting: 5\n" Apr 28 14:20:13.470: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 14:20:13.470: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 14:20:13.474: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 28 14:20:23.479: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 14:20:23.479: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently 
Running - Ready=true Apr 28 14:20:23.479: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 28 14:20:23.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3452 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 14:20:23.723: INFO: stderr: "I0428 14:20:23.609965 2623 log.go:172] (0xc000116fd0) (0xc0005e0aa0) Create stream\nI0428 14:20:23.610017 2623 log.go:172] (0xc000116fd0) (0xc0005e0aa0) Stream added, broadcasting: 1\nI0428 14:20:23.614200 2623 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0428 14:20:23.614257 2623 log.go:172] (0xc000116fd0) (0xc0005e0320) Create stream\nI0428 14:20:23.614274 2623 log.go:172] (0xc000116fd0) (0xc0005e0320) Stream added, broadcasting: 3\nI0428 14:20:23.615257 2623 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0428 14:20:23.615290 2623 log.go:172] (0xc000116fd0) (0xc0006ac000) Create stream\nI0428 14:20:23.615305 2623 log.go:172] (0xc000116fd0) (0xc0006ac000) Stream added, broadcasting: 5\nI0428 14:20:23.616206 2623 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0428 14:20:23.715133 2623 log.go:172] (0xc000116fd0) Data frame received for 5\nI0428 14:20:23.715197 2623 log.go:172] (0xc0006ac000) (5) Data frame handling\nI0428 14:20:23.715217 2623 log.go:172] (0xc0006ac000) (5) Data frame sent\nI0428 14:20:23.715233 2623 log.go:172] (0xc000116fd0) Data frame received for 5\nI0428 14:20:23.715250 2623 log.go:172] (0xc0006ac000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0428 14:20:23.715285 2623 log.go:172] (0xc000116fd0) Data frame received for 3\nI0428 14:20:23.715324 2623 log.go:172] (0xc0005e0320) (3) Data frame handling\nI0428 14:20:23.715353 2623 log.go:172] (0xc0005e0320) (3) Data frame sent\nI0428 14:20:23.715371 2623 log.go:172] (0xc000116fd0) Data frame received for 3\nI0428 
14:20:23.715386 2623 log.go:172] (0xc0005e0320) (3) Data frame handling\nI0428 14:20:23.717511 2623 log.go:172] (0xc000116fd0) Data frame received for 1\nI0428 14:20:23.717546 2623 log.go:172] (0xc0005e0aa0) (1) Data frame handling\nI0428 14:20:23.717569 2623 log.go:172] (0xc0005e0aa0) (1) Data frame sent\nI0428 14:20:23.717580 2623 log.go:172] (0xc000116fd0) (0xc0005e0aa0) Stream removed, broadcasting: 1\nI0428 14:20:23.717839 2623 log.go:172] (0xc000116fd0) Go away received\nI0428 14:20:23.717936 2623 log.go:172] (0xc000116fd0) (0xc0005e0aa0) Stream removed, broadcasting: 1\nI0428 14:20:23.717964 2623 log.go:172] (0xc000116fd0) (0xc0005e0320) Stream removed, broadcasting: 3\nI0428 14:20:23.717977 2623 log.go:172] (0xc000116fd0) (0xc0006ac000) Stream removed, broadcasting: 5\n" Apr 28 14:20:23.723: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 14:20:23.723: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 14:20:23.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3452 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 14:20:23.959: INFO: stderr: "I0428 14:20:23.837905 2644 log.go:172] (0xc000116e70) (0xc00035c820) Create stream\nI0428 14:20:23.837971 2644 log.go:172] (0xc000116e70) (0xc00035c820) Stream added, broadcasting: 1\nI0428 14:20:23.851715 2644 log.go:172] (0xc000116e70) Reply frame received for 1\nI0428 14:20:23.851765 2644 log.go:172] (0xc000116e70) (0xc00035c000) Create stream\nI0428 14:20:23.851778 2644 log.go:172] (0xc000116e70) (0xc00035c000) Stream added, broadcasting: 3\nI0428 14:20:23.853377 2644 log.go:172] (0xc000116e70) Reply frame received for 3\nI0428 14:20:23.853416 2644 log.go:172] (0xc000116e70) (0xc000542140) Create stream\nI0428 14:20:23.853431 2644 log.go:172] (0xc000116e70) (0xc000542140) Stream added, broadcasting: 5\nI0428 
14:20:23.858357 2644 log.go:172] (0xc000116e70) Reply frame received for 5\nI0428 14:20:23.925321 2644 log.go:172] (0xc000116e70) Data frame received for 5\nI0428 14:20:23.925363 2644 log.go:172] (0xc000542140) (5) Data frame handling\nI0428 14:20:23.925377 2644 log.go:172] (0xc000542140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0428 14:20:23.952925 2644 log.go:172] (0xc000116e70) Data frame received for 3\nI0428 14:20:23.952959 2644 log.go:172] (0xc00035c000) (3) Data frame handling\nI0428 14:20:23.952983 2644 log.go:172] (0xc00035c000) (3) Data frame sent\nI0428 14:20:23.952994 2644 log.go:172] (0xc000116e70) Data frame received for 3\nI0428 14:20:23.953002 2644 log.go:172] (0xc00035c000) (3) Data frame handling\nI0428 14:20:23.953486 2644 log.go:172] (0xc000116e70) Data frame received for 5\nI0428 14:20:23.953554 2644 log.go:172] (0xc000542140) (5) Data frame handling\nI0428 14:20:23.955133 2644 log.go:172] (0xc000116e70) Data frame received for 1\nI0428 14:20:23.955154 2644 log.go:172] (0xc00035c820) (1) Data frame handling\nI0428 14:20:23.955169 2644 log.go:172] (0xc00035c820) (1) Data frame sent\nI0428 14:20:23.955186 2644 log.go:172] (0xc000116e70) (0xc00035c820) Stream removed, broadcasting: 1\nI0428 14:20:23.955201 2644 log.go:172] (0xc000116e70) Go away received\nI0428 14:20:23.955565 2644 log.go:172] (0xc000116e70) (0xc00035c820) Stream removed, broadcasting: 1\nI0428 14:20:23.955581 2644 log.go:172] (0xc000116e70) (0xc00035c000) Stream removed, broadcasting: 3\nI0428 14:20:23.955591 2644 log.go:172] (0xc000116e70) (0xc000542140) Stream removed, broadcasting: 5\n" Apr 28 14:20:23.959: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 14:20:23.959: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 14:20:23.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3452 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 14:20:24.186: INFO: stderr: "I0428 14:20:24.083569 2663 log.go:172] (0xc0009a0370) (0xc0009ee6e0) Create stream\nI0428 14:20:24.083651 2663 log.go:172] (0xc0009a0370) (0xc0009ee6e0) Stream added, broadcasting: 1\nI0428 14:20:24.085989 2663 log.go:172] (0xc0009a0370) Reply frame received for 1\nI0428 14:20:24.086020 2663 log.go:172] (0xc0009a0370) (0xc00053a280) Create stream\nI0428 14:20:24.086028 2663 log.go:172] (0xc0009a0370) (0xc00053a280) Stream added, broadcasting: 3\nI0428 14:20:24.086881 2663 log.go:172] (0xc0009a0370) Reply frame received for 3\nI0428 14:20:24.086905 2663 log.go:172] (0xc0009a0370) (0xc0009ee780) Create stream\nI0428 14:20:24.086911 2663 log.go:172] (0xc0009a0370) (0xc0009ee780) Stream added, broadcasting: 5\nI0428 14:20:24.087599 2663 log.go:172] (0xc0009a0370) Reply frame received for 5\nI0428 14:20:24.150627 2663 log.go:172] (0xc0009a0370) Data frame received for 5\nI0428 14:20:24.150651 2663 log.go:172] (0xc0009ee780) (5) Data frame handling\nI0428 14:20:24.150664 2663 log.go:172] (0xc0009ee780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0428 14:20:24.177629 2663 log.go:172] (0xc0009a0370) Data frame received for 3\nI0428 14:20:24.177671 2663 log.go:172] (0xc00053a280) (3) Data frame handling\nI0428 14:20:24.177719 2663 log.go:172] (0xc00053a280) (3) Data frame sent\nI0428 14:20:24.177739 2663 log.go:172] (0xc0009a0370) Data frame received for 3\nI0428 14:20:24.177755 2663 log.go:172] (0xc00053a280) (3) Data frame handling\nI0428 14:20:24.177975 2663 log.go:172] (0xc0009a0370) Data frame received for 5\nI0428 14:20:24.178002 2663 log.go:172] (0xc0009ee780) (5) Data frame handling\nI0428 14:20:24.179846 2663 log.go:172] (0xc0009a0370) Data frame received for 1\nI0428 14:20:24.179864 2663 log.go:172] (0xc0009ee6e0) (1) Data frame handling\nI0428 14:20:24.179881 2663 log.go:172] (0xc0009ee6e0) (1) Data 
frame sent\nI0428 14:20:24.179900 2663 log.go:172] (0xc0009a0370) (0xc0009ee6e0) Stream removed, broadcasting: 1\nI0428 14:20:24.180041 2663 log.go:172] (0xc0009a0370) Go away received\nI0428 14:20:24.180444 2663 log.go:172] (0xc0009a0370) (0xc0009ee6e0) Stream removed, broadcasting: 1\nI0428 14:20:24.180465 2663 log.go:172] (0xc0009a0370) (0xc00053a280) Stream removed, broadcasting: 3\nI0428 14:20:24.180488 2663 log.go:172] (0xc0009a0370) (0xc0009ee780) Stream removed, broadcasting: 5\n" Apr 28 14:20:24.186: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 14:20:24.186: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 14:20:24.186: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 14:20:24.207: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 28 14:20:34.215: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 14:20:34.215: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 28 14:20:34.215: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 28 14:20:34.241: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 14:20:34.241: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC }] Apr 28 14:20:34.241: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 
14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:34.241: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:34.241: INFO: Apr 28 14:20:34.241: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 14:20:35.247: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 14:20:35.247: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC }] Apr 28 14:20:35.247: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:35.247: INFO: ss-2 iruya-worker2 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:35.247: INFO: Apr 28 14:20:35.247: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 14:20:36.252: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 14:20:36.252: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC }] Apr 28 14:20:36.252: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:36.252: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:36.252: INFO: Apr 28 14:20:36.252: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 14:20:37.258: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 14:20:37.258: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC }] Apr 28 14:20:37.258: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:37.258: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:37.258: INFO: Apr 28 14:20:37.258: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 14:20:38.263: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 14:20:38.263: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC }] Apr 28 14:20:38.263: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:38.263: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:38.263: INFO: Apr 28 14:20:38.263: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 14:20:39.267: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 14:20:39.267: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC }] Apr 28 14:20:39.267: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:39.267: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:39.267: INFO: Apr 28 14:20:39.267: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 14:20:40.279: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 14:20:40.279: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC }] Apr 28 14:20:40.279: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:40.279: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:40.279: INFO: Apr 28 14:20:40.279: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 14:20:41.327: INFO: POD NODE PHASE GRACE CONDITIONS Apr 28 14:20:41.327: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:19:40 +0000 UTC }] Apr 28 14:20:41.327: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:41.327: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC } 
{Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:20:02 +0000 UTC }] Apr 28 14:20:41.327: INFO: Apr 28 14:20:41.327: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 28 14:20:42.331: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.895363103s Apr 28 14:20:43.335: INFO: Verifying statefulset ss doesn't scale past 0 for another 891.086221ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-3452 Apr 28 14:20:44.339: INFO: Scaling statefulset ss to 0 Apr 28 14:20:44.349: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 28 14:20:44.351: INFO: Deleting all statefulset in ns statefulset-3452 Apr 28 14:20:44.354: INFO: Scaling statefulset ss to 0 Apr 28 14:20:44.361: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 14:20:44.363: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:20:44.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3452" for this suite. 
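The repeated "StatefulSet ss has not reached scale 0, at 3" lines above come from a poll loop in the e2e framework. Below is a minimal shell sketch of that loop; `get_replicas` is a mock counter standing in for a live query such as `kubectl -n statefulset-3452 get statefulset ss -o jsonpath='{.status.replicas}'` (that kubectl form, and the countdown behavior, are assumptions so the sketch runs without a cluster):

```shell
# Mock replica counter: a real run would ask the API server instead.
count=3
get_replicas() {
    echo "$count"
}

polls=0
while [ "$(get_replicas)" -gt 0 ]; do
    polls=$((polls + 1))
    echo "StatefulSet ss has not reached scale 0, at $(get_replicas)"
    count=$((count - 1))   # mock: pretend one pod terminates between polls
done
echo "StatefulSet ss reached scale 0 after $polls polls"
```

The real framework additionally bounds the loop with a timeout (visible above as "Verifying statefulset ss doesn't scale past 0 for another ...").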
Apr 28 14:20:50.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:20:50.478: INFO: namespace statefulset-3452 deletion completed in 6.096856654s • [SLOW TEST:70.527 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:20:50.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Apr 28 14:20:50.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7890' Apr 28 14:20:50.825: INFO: stderr: "" Apr 28 14:20:50.825: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Apr 28 14:20:51.829: INFO: Selector matched 1 pods for map[app:redis] Apr 28 14:20:51.829: INFO: Found 0 / 1 Apr 28 14:20:52.845: INFO: Selector matched 1 pods for map[app:redis] Apr 28 14:20:52.845: INFO: Found 0 / 1 Apr 28 14:20:53.829: INFO: Selector matched 1 pods for map[app:redis] Apr 28 14:20:53.829: INFO: Found 1 / 1 Apr 28 14:20:53.829: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 28 14:20:53.832: INFO: Selector matched 1 pods for map[app:redis] Apr 28 14:20:53.832: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Apr 28 14:20:53.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7r45x redis-master --namespace=kubectl-7890' Apr 28 14:20:53.941: INFO: stderr: "" Apr 28 14:20:53.941: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Apr 14:20:53.250 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Apr 14:20:53.250 # Server started, Redis version 3.2.12\n1:M 28 Apr 14:20:53.250 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Apr 14:20:53.250 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Apr 28 14:20:53.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7r45x redis-master --namespace=kubectl-7890 --tail=1' Apr 28 14:20:54.047: INFO: stderr: "" Apr 28 14:20:54.047: INFO: stdout: "1:M 28 Apr 14:20:53.250 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Apr 28 14:20:54.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7r45x redis-master --namespace=kubectl-7890 --limit-bytes=1' Apr 28 14:20:54.163: INFO: stderr: "" Apr 28 14:20:54.163: INFO: stdout: " " STEP: exposing timestamps Apr 28 14:20:54.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7r45x redis-master --namespace=kubectl-7890 --tail=1 --timestamps' Apr 28 14:20:54.283: INFO: stderr: "" Apr 28 14:20:54.283: INFO: stdout: "2020-04-28T14:20:53.251075248Z 1:M 28 Apr 14:20:53.250 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Apr 28 14:20:56.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7r45x redis-master --namespace=kubectl-7890 --since=1s' Apr 28 14:20:56.900: INFO: stderr: "" Apr 28 14:20:56.900: INFO: stdout: "" Apr 28 14:20:56.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7r45x redis-master --namespace=kubectl-7890 --since=24h' Apr 28 14:20:57.001: INFO: stderr: "" Apr 28 14:20:57.001: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Apr 14:20:53.250 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Apr 14:20:53.250 # Server started, Redis version 3.2.12\n1:M 28 Apr 14:20:53.250 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Apr 14:20:53.250 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Apr 28 14:20:57.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7890' Apr 28 14:20:57.093: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 28 14:20:57.094: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Apr 28 14:20:57.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7890' Apr 28 14:20:57.191: INFO: stderr: "No resources found.\n" Apr 28 14:20:57.191: INFO: stdout: "" Apr 28 14:20:57.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7890 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 28 14:20:57.276: INFO: stderr: "" Apr 28 14:20:57.276: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:20:57.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7890" for this suite. 
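The kubectl flags exercised in the test above (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) trim the container log much like ordinary text filters. The sketch below runs against a local file so it needs no cluster; the kubectl invocations are quoted from the log for comparison:

```shell
# Three-line stand-in for a container log.
printf 'line one\nline two\nline three\n' > /tmp/app.log

# kubectl logs redis-master-7r45x redis-master --tail=1
#   -> last line only; local analogue:
tail -n 1 /tmp/app.log      # prints "line three"

# kubectl logs redis-master-7r45x redis-master --limit-bytes=1
#   -> first byte only; local analogue:
head -c 1 /tmp/app.log      # prints "l" (no trailing newline)

# --timestamps and --since have no plain-file analogue: they rely on the
# per-line timestamps the container runtime records, which is why
# --since=1s returned empty stdout a few seconds after the last log line
# while --since=24h returned the full banner again.
```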
Apr 28 14:21:03.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:21:03.371: INFO: namespace kubectl-7890 deletion completed in 6.092283463s • [SLOW TEST:12.892 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:21:03.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:21:03.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3036" for this suite. 
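The "should provide secure master service" spec above shows no STEP lines because the whole check is a few API reads: it asserts that the default namespace exposes a `kubernetes` Service with an https port of 443. A hedged sketch of that assertion follows; the kubectl line is a hypothetical live-cluster form, and the runnable part checks a canned JSON document instead:

```shell
# Against a live cluster one might run (hypothetical invocation):
#   kubectl -n default get svc kubernetes \
#     -o jsonpath='{.spec.ports[?(@.name=="https")].port}'
# Canned stand-in for the Service object so this executes anywhere:
cat > /tmp/svc.json <<'EOF'
{"spec":{"ports":[{"name":"https","port":443}]}}
EOF
port=$(sed -n 's/.*"name":"https","port":\([0-9]*\).*/\1/p' /tmp/svc.json)
echo "https port: $port"
[ "$port" = "443" ] && echo "secure master service looks OK"
```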
Apr 28 14:21:09.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:21:09.559: INFO: namespace services-3036 deletion completed in 6.092838983s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.188 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:21:09.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Apr 28 14:21:09.643: INFO: Waiting up to 5m0s for pod "client-containers-102c388d-7e28-488b-a379-30283b9a7e29" in namespace "containers-2031" to be "success or failure" Apr 28 14:21:09.674: INFO: Pod "client-containers-102c388d-7e28-488b-a379-30283b9a7e29": Phase="Pending", Reason="", readiness=false. Elapsed: 31.414226ms Apr 28 14:21:11.679: INFO: Pod "client-containers-102c388d-7e28-488b-a379-30283b9a7e29": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.03558835s Apr 28 14:21:13.683: INFO: Pod "client-containers-102c388d-7e28-488b-a379-30283b9a7e29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03995884s STEP: Saw pod success Apr 28 14:21:13.683: INFO: Pod "client-containers-102c388d-7e28-488b-a379-30283b9a7e29" satisfied condition "success or failure" Apr 28 14:21:13.686: INFO: Trying to get logs from node iruya-worker pod client-containers-102c388d-7e28-488b-a379-30283b9a7e29 container test-container: STEP: delete the pod Apr 28 14:21:13.707: INFO: Waiting for pod client-containers-102c388d-7e28-488b-a379-30283b9a7e29 to disappear Apr 28 14:21:13.735: INFO: Pod client-containers-102c388d-7e28-488b-a379-30283b9a7e29 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:21:13.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2031" for this suite. Apr 28 14:21:19.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:21:19.870: INFO: namespace containers-2031 deletion completed in 6.128715236s • [SLOW TEST:10.311 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:21:19.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 14:21:19.904: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 28 14:21:19.955: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 28 14:21:24.960: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 28 14:21:24.960: INFO: Creating deployment "test-rolling-update-deployment" Apr 28 14:21:24.964: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 28 14:21:24.971: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 28 14:21:26.978: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 28 14:21:26.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723680485, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723680485, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723680485, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723680484, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 28 14:21:28.984: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 28 14:21:28.993: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-8625,SelfLink:/apis/apps/v1/namespaces/deployment-8625/deployments/test-rolling-update-deployment,UID:7a1e10ec-161d-4796-a990-2a7883ef8705,ResourceVersion:7911343,Generation:1,CreationTimestamp:2020-04-28 14:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-28 14:21:25 +0000 UTC 2020-04-28 14:21:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-28 14:21:27 +0000 UTC 2020-04-28 14:21:24 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 28 14:21:28.997: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-8625,SelfLink:/apis/apps/v1/namespaces/deployment-8625/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:6c3330bd-c587-4033-a922-3389c8f39749,ResourceVersion:7911331,Generation:1,CreationTimestamp:2020-04-28 14:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7a1e10ec-161d-4796-a990-2a7883ef8705 0xc002911117 0xc002911118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 28 14:21:28.997: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 28 14:21:28.997: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-8625,SelfLink:/apis/apps/v1/namespaces/deployment-8625/replicasets/test-rolling-update-controller,UID:af96dbb7-198f-4ecb-9067-6d30ece03278,ResourceVersion:7911340,Generation:2,CreationTimestamp:2020-04-28 14:21:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7a1e10ec-161d-4796-a990-2a7883ef8705 0xc00291102f 0xc002911040}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 28 14:21:29.001: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-84gcj" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-84gcj,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-8625,SelfLink:/api/v1/namespaces/deployment-8625/pods/test-rolling-update-deployment-79f6b9d75c-84gcj,UID:71d40082-d515-4762-a8e8-e95e8d9356ed,ResourceVersion:7911330,Generation:0,CreationTimestamp:2020-04-28 14:21:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 6c3330bd-c587-4033-a922-3389c8f39749 0xc001e2a537 0xc001e2a538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4dpsn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4dpsn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4dpsn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e2a5b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e2a5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:21:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:21:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:21:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-28 14:21:24 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.76,StartTime:2020-04-28 14:21:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-28 14:21:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://361b46f0abe54274d01925f78d8ea48c2a87c310a14a78629b1f85902ecfb65a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:21:29.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-8625" for this suite. Apr 28 14:21:35.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:21:35.094: INFO: namespace deployment-8625 deletion completed in 6.088930342s • [SLOW TEST:15.222 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:21:35.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 28 14:21:35.189: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:21:42.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7852" for this suite. 
Apr 28 14:21:48.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:21:48.634: INFO: namespace init-container-7852 deletion completed in 6.122137075s • [SLOW TEST:13.539 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:21:48.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 28 14:21:49.166: INFO: Pod name wrapped-volume-race-d34621a7-6d5f-4815-b83d-0a22133eac52: Found 0 pods out of 5 Apr 28 14:21:54.175: INFO: Pod name wrapped-volume-race-d34621a7-6d5f-4815-b83d-0a22133eac52: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d34621a7-6d5f-4815-b83d-0a22133eac52 in namespace emptydir-wrapper-1031, will wait for the garbage collector to delete the pods Apr 28 14:22:08.260: INFO: Deleting ReplicationController 
wrapped-volume-race-d34621a7-6d5f-4815-b83d-0a22133eac52 took: 8.633449ms Apr 28 14:22:08.560: INFO: Terminating ReplicationController wrapped-volume-race-d34621a7-6d5f-4815-b83d-0a22133eac52 pods took: 300.290585ms STEP: Creating RC which spawns configmap-volume pods Apr 28 14:22:52.698: INFO: Pod name wrapped-volume-race-2dc83035-0625-4feb-953f-7a460b1de77f: Found 0 pods out of 5 Apr 28 14:22:57.707: INFO: Pod name wrapped-volume-race-2dc83035-0625-4feb-953f-7a460b1de77f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2dc83035-0625-4feb-953f-7a460b1de77f in namespace emptydir-wrapper-1031, will wait for the garbage collector to delete the pods Apr 28 14:23:11.790: INFO: Deleting ReplicationController wrapped-volume-race-2dc83035-0625-4feb-953f-7a460b1de77f took: 13.730339ms Apr 28 14:23:12.090: INFO: Terminating ReplicationController wrapped-volume-race-2dc83035-0625-4feb-953f-7a460b1de77f pods took: 300.264375ms STEP: Creating RC which spawns configmap-volume pods Apr 28 14:23:53.246: INFO: Pod name wrapped-volume-race-2ab3b4d8-f822-4c14-92f7-d48d74758f4d: Found 0 pods out of 5 Apr 28 14:23:58.262: INFO: Pod name wrapped-volume-race-2ab3b4d8-f822-4c14-92f7-d48d74758f4d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2ab3b4d8-f822-4c14-92f7-d48d74758f4d in namespace emptydir-wrapper-1031, will wait for the garbage collector to delete the pods Apr 28 14:24:12.360: INFO: Deleting ReplicationController wrapped-volume-race-2ab3b4d8-f822-4c14-92f7-d48d74758f4d took: 7.762541ms Apr 28 14:24:12.660: INFO: Terminating ReplicationController wrapped-volume-race-2ab3b4d8-f822-4c14-92f7-d48d74758f4d pods took: 300.320252ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:24:53.271: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1031" for this suite. Apr 28 14:25:01.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:25:01.398: INFO: namespace emptydir-wrapper-1031 deletion completed in 8.122337059s • [SLOW TEST:192.764 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:25:01.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 28 14:25:01.473: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 28 14:25:10.504: INFO: no pod exists with the name we were looking for, assuming 
the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:25:10.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6419" for this suite. Apr 28 14:25:16.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:25:16.597: INFO: namespace pods-6419 deletion completed in 6.08693283s • [SLOW TEST:15.199 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:25:16.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 28 14:25:16.654: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:25:17.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6089" for this suite. Apr 28 14:25:23.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:25:23.913: INFO: namespace custom-resource-definition-6089 deletion completed in 6.09817779s • [SLOW TEST:7.315 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:25:23.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3858 STEP: creating a selector STEP: Creating the 
service pods in kubernetes Apr 28 14:25:23.970: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 28 14:25:48.074: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:8080/dial?request=hostName&protocol=http&host=10.244.1.78&port=8080&tries=1'] Namespace:pod-network-test-3858 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 14:25:48.074: INFO: >>> kubeConfig: /root/.kube/config I0428 14:25:48.112249 6 log.go:172] (0xc000a858c0) (0xc001ea0820) Create stream I0428 14:25:48.112286 6 log.go:172] (0xc000a858c0) (0xc001ea0820) Stream added, broadcasting: 1 I0428 14:25:48.114936 6 log.go:172] (0xc000a858c0) Reply frame received for 1 I0428 14:25:48.114987 6 log.go:172] (0xc000a858c0) (0xc0023e6000) Create stream I0428 14:25:48.115004 6 log.go:172] (0xc000a858c0) (0xc0023e6000) Stream added, broadcasting: 3 I0428 14:25:48.115820 6 log.go:172] (0xc000a858c0) Reply frame received for 3 I0428 14:25:48.115844 6 log.go:172] (0xc000a858c0) (0xc001ea08c0) Create stream I0428 14:25:48.115852 6 log.go:172] (0xc000a858c0) (0xc001ea08c0) Stream added, broadcasting: 5 I0428 14:25:48.116613 6 log.go:172] (0xc000a858c0) Reply frame received for 5 I0428 14:25:48.194925 6 log.go:172] (0xc000a858c0) Data frame received for 3 I0428 14:25:48.194954 6 log.go:172] (0xc0023e6000) (3) Data frame handling I0428 14:25:48.194982 6 log.go:172] (0xc0023e6000) (3) Data frame sent I0428 14:25:48.195385 6 log.go:172] (0xc000a858c0) Data frame received for 5 I0428 14:25:48.195404 6 log.go:172] (0xc001ea08c0) (5) Data frame handling I0428 14:25:48.195440 6 log.go:172] (0xc000a858c0) Data frame received for 3 I0428 14:25:48.195474 6 log.go:172] (0xc0023e6000) (3) Data frame handling I0428 14:25:48.197996 6 log.go:172] (0xc000a858c0) Data frame received for 1 I0428 14:25:48.198017 6 log.go:172] (0xc001ea0820) (1) Data frame handling I0428 14:25:48.198026 
6 log.go:172] (0xc001ea0820) (1) Data frame sent I0428 14:25:48.198040 6 log.go:172] (0xc000a858c0) (0xc001ea0820) Stream removed, broadcasting: 1 I0428 14:25:48.198059 6 log.go:172] (0xc000a858c0) Go away received I0428 14:25:48.198153 6 log.go:172] (0xc000a858c0) (0xc001ea0820) Stream removed, broadcasting: 1 I0428 14:25:48.198171 6 log.go:172] (0xc000a858c0) (0xc0023e6000) Stream removed, broadcasting: 3 I0428 14:25:48.198183 6 log.go:172] (0xc000a858c0) (0xc001ea08c0) Stream removed, broadcasting: 5 Apr 28 14:25:48.198: INFO: Waiting for endpoints: map[] Apr 28 14:25:48.201: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.155:8080/dial?request=hostName&protocol=http&host=10.244.2.154&port=8080&tries=1'] Namespace:pod-network-test-3858 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 28 14:25:48.201: INFO: >>> kubeConfig: /root/.kube/config I0428 14:25:48.234386 6 log.go:172] (0xc0025e6000) (0xc001fbc5a0) Create stream I0428 14:25:48.234412 6 log.go:172] (0xc0025e6000) (0xc001fbc5a0) Stream added, broadcasting: 1 I0428 14:25:48.236751 6 log.go:172] (0xc0025e6000) Reply frame received for 1 I0428 14:25:48.236795 6 log.go:172] (0xc0025e6000) (0xc001fbc640) Create stream I0428 14:25:48.236811 6 log.go:172] (0xc0025e6000) (0xc001fbc640) Stream added, broadcasting: 3 I0428 14:25:48.238342 6 log.go:172] (0xc0025e6000) Reply frame received for 3 I0428 14:25:48.238383 6 log.go:172] (0xc0025e6000) (0xc0023e6140) Create stream I0428 14:25:48.238397 6 log.go:172] (0xc0025e6000) (0xc0023e6140) Stream added, broadcasting: 5 I0428 14:25:48.239423 6 log.go:172] (0xc0025e6000) Reply frame received for 5 I0428 14:25:48.308025 6 log.go:172] (0xc0025e6000) Data frame received for 3 I0428 14:25:48.308048 6 log.go:172] (0xc001fbc640) (3) Data frame handling I0428 14:25:48.308065 6 log.go:172] (0xc001fbc640) (3) Data frame sent I0428 14:25:48.308631 6 log.go:172] (0xc0025e6000) 
Data frame received for 5 I0428 14:25:48.308656 6 log.go:172] (0xc0023e6140) (5) Data frame handling I0428 14:25:48.308686 6 log.go:172] (0xc0025e6000) Data frame received for 3 I0428 14:25:48.308697 6 log.go:172] (0xc001fbc640) (3) Data frame handling I0428 14:25:48.310388 6 log.go:172] (0xc0025e6000) Data frame received for 1 I0428 14:25:48.310414 6 log.go:172] (0xc001fbc5a0) (1) Data frame handling I0428 14:25:48.310442 6 log.go:172] (0xc001fbc5a0) (1) Data frame sent I0428 14:25:48.310457 6 log.go:172] (0xc0025e6000) (0xc001fbc5a0) Stream removed, broadcasting: 1 I0428 14:25:48.310546 6 log.go:172] (0xc0025e6000) Go away received I0428 14:25:48.310619 6 log.go:172] (0xc0025e6000) (0xc001fbc5a0) Stream removed, broadcasting: 1 I0428 14:25:48.310661 6 log.go:172] (0xc0025e6000) (0xc001fbc640) Stream removed, broadcasting: 3 I0428 14:25:48.310676 6 log.go:172] (0xc0025e6000) (0xc0023e6140) Stream removed, broadcasting: 5 Apr 28 14:25:48.310: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:25:48.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3858" for this suite. 
Apr 28 14:26:10.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:26:10.401: INFO: namespace pod-network-test-3858 deletion completed in 22.086169732s • [SLOW TEST:46.487 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:26:10.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7189 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector 
baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-7189 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7189 Apr 28 14:26:10.477: INFO: Found 0 stateful pods, waiting for 1 Apr 28 14:26:20.482: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 28 14:26:20.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 14:26:20.758: INFO: stderr: "I0428 14:26:20.629871 2885 log.go:172] (0xc000a0e420) (0xc0003b6820) Create stream\nI0428 14:26:20.629932 2885 log.go:172] (0xc000a0e420) (0xc0003b6820) Stream added, broadcasting: 1\nI0428 14:26:20.634260 2885 log.go:172] (0xc000a0e420) Reply frame received for 1\nI0428 14:26:20.634309 2885 log.go:172] (0xc000a0e420) (0xc0003b6000) Create stream\nI0428 14:26:20.634328 2885 log.go:172] (0xc000a0e420) (0xc0003b6000) Stream added, broadcasting: 3\nI0428 14:26:20.635223 2885 log.go:172] (0xc000a0e420) Reply frame received for 3\nI0428 14:26:20.635277 2885 log.go:172] (0xc000a0e420) (0xc0003b6140) Create stream\nI0428 14:26:20.635293 2885 log.go:172] (0xc000a0e420) (0xc0003b6140) Stream added, broadcasting: 5\nI0428 14:26:20.636073 2885 log.go:172] (0xc000a0e420) Reply frame received for 5\nI0428 14:26:20.707706 2885 log.go:172] (0xc000a0e420) Data frame received for 5\nI0428 14:26:20.707739 2885 log.go:172] (0xc0003b6140) (5) Data frame handling\nI0428 14:26:20.707757 2885 log.go:172] (0xc0003b6140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0428 14:26:20.750645 2885 log.go:172] (0xc000a0e420) Data frame received for 3\nI0428 14:26:20.750676 2885 log.go:172] (0xc0003b6000) (3) Data frame handling\nI0428 14:26:20.750693 2885 log.go:172] (0xc0003b6000) (3) Data frame sent\nI0428 
14:26:20.750817 2885 log.go:172] (0xc000a0e420) Data frame received for 5\nI0428 14:26:20.750831 2885 log.go:172] (0xc0003b6140) (5) Data frame handling\nI0428 14:26:20.750854 2885 log.go:172] (0xc000a0e420) Data frame received for 3\nI0428 14:26:20.750861 2885 log.go:172] (0xc0003b6000) (3) Data frame handling\nI0428 14:26:20.752675 2885 log.go:172] (0xc000a0e420) Data frame received for 1\nI0428 14:26:20.752688 2885 log.go:172] (0xc0003b6820) (1) Data frame handling\nI0428 14:26:20.752698 2885 log.go:172] (0xc0003b6820) (1) Data frame sent\nI0428 14:26:20.752819 2885 log.go:172] (0xc000a0e420) (0xc0003b6820) Stream removed, broadcasting: 1\nI0428 14:26:20.752853 2885 log.go:172] (0xc000a0e420) Go away received\nI0428 14:26:20.753666 2885 log.go:172] (0xc000a0e420) (0xc0003b6820) Stream removed, broadcasting: 1\nI0428 14:26:20.753694 2885 log.go:172] (0xc000a0e420) (0xc0003b6000) Stream removed, broadcasting: 3\nI0428 14:26:20.753711 2885 log.go:172] (0xc000a0e420) (0xc0003b6140) Stream removed, broadcasting: 5\n" Apr 28 14:26:20.758: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 14:26:20.758: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 14:26:20.787: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 28 14:26:30.791: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 14:26:30.791: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 14:26:30.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999185s Apr 28 14:26:31.813: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993903101s Apr 28 14:26:32.818: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98870411s Apr 28 14:26:33.822: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984406625s Apr 28 14:26:34.827: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979854662s Apr 28 14:26:35.832: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.975277775s Apr 28 14:26:36.836: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.970097542s Apr 28 14:26:37.842: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965524621s Apr 28 14:26:38.846: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.960105804s Apr 28 14:26:39.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.696212ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7189 Apr 28 14:26:40.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:26:41.105: INFO: stderr: "I0428 14:26:40.989791 2906 log.go:172] (0xc00098a420) (0xc0004fc820) Create stream\nI0428 14:26:40.989847 2906 log.go:172] (0xc00098a420) (0xc0004fc820) Stream added, broadcasting: 1\nI0428 14:26:40.993923 2906 log.go:172] (0xc00098a420) Reply frame received for 1\nI0428 14:26:40.993985 2906 log.go:172] (0xc00098a420) (0xc0005de280) Create stream\nI0428 14:26:40.994004 2906 log.go:172] (0xc00098a420) (0xc0005de280) Stream added, broadcasting: 3\nI0428 14:26:40.994916 2906 log.go:172] (0xc00098a420) Reply frame received for 3\nI0428 14:26:40.994947 2906 log.go:172] (0xc00098a420) (0xc0005de320) Create stream\nI0428 14:26:40.994958 2906 log.go:172] (0xc00098a420) (0xc0005de320) Stream added, broadcasting: 5\nI0428 14:26:40.995809 2906 log.go:172] (0xc00098a420) Reply frame received for 5\nI0428 14:26:41.093566 2906 log.go:172] (0xc00098a420) Data frame received for 5\nI0428 14:26:41.093610 2906 log.go:172] (0xc0005de320) (5) Data frame handling\nI0428 14:26:41.093631 2906 log.go:172] (0xc0005de320) (5) Data frame sent\nI0428 14:26:41.093644 2906 log.go:172] 
(0xc00098a420) Data frame received for 5\nI0428 14:26:41.093657 2906 log.go:172] (0xc0005de320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0428 14:26:41.098831 2906 log.go:172] (0xc00098a420) Data frame received for 3\nI0428 14:26:41.098845 2906 log.go:172] (0xc0005de280) (3) Data frame handling\nI0428 14:26:41.098856 2906 log.go:172] (0xc0005de280) (3) Data frame sent\nI0428 14:26:41.099561 2906 log.go:172] (0xc00098a420) Data frame received for 3\nI0428 14:26:41.099582 2906 log.go:172] (0xc0005de280) (3) Data frame handling\nI0428 14:26:41.100826 2906 log.go:172] (0xc00098a420) Data frame received for 1\nI0428 14:26:41.100839 2906 log.go:172] (0xc0004fc820) (1) Data frame handling\nI0428 14:26:41.100848 2906 log.go:172] (0xc0004fc820) (1) Data frame sent\nI0428 14:26:41.100860 2906 log.go:172] (0xc00098a420) (0xc0004fc820) Stream removed, broadcasting: 1\nI0428 14:26:41.100874 2906 log.go:172] (0xc00098a420) Go away received\nI0428 14:26:41.101281 2906 log.go:172] (0xc00098a420) (0xc0004fc820) Stream removed, broadcasting: 1\nI0428 14:26:41.101294 2906 log.go:172] (0xc00098a420) (0xc0005de280) Stream removed, broadcasting: 3\nI0428 14:26:41.101299 2906 log.go:172] (0xc00098a420) (0xc0005de320) Stream removed, broadcasting: 5\n" Apr 28 14:26:41.105: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 14:26:41.105: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 14:26:41.109: INFO: Found 1 stateful pods, waiting for 3 Apr 28 14:26:51.127: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 28 14:26:51.127: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 28 14:26:51.127: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale 
down will halt with unhealthy stateful pod Apr 28 14:26:51.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 14:26:51.376: INFO: stderr: "I0428 14:26:51.269540 2926 log.go:172] (0xc000a84420) (0xc0009606e0) Create stream\nI0428 14:26:51.269591 2926 log.go:172] (0xc000a84420) (0xc0009606e0) Stream added, broadcasting: 1\nI0428 14:26:51.271959 2926 log.go:172] (0xc000a84420) Reply frame received for 1\nI0428 14:26:51.272000 2926 log.go:172] (0xc000a84420) (0xc00079e320) Create stream\nI0428 14:26:51.272014 2926 log.go:172] (0xc000a84420) (0xc00079e320) Stream added, broadcasting: 3\nI0428 14:26:51.273065 2926 log.go:172] (0xc000a84420) Reply frame received for 3\nI0428 14:26:51.273104 2926 log.go:172] (0xc000a84420) (0xc00079e3c0) Create stream\nI0428 14:26:51.273271 2926 log.go:172] (0xc000a84420) (0xc00079e3c0) Stream added, broadcasting: 5\nI0428 14:26:51.274361 2926 log.go:172] (0xc000a84420) Reply frame received for 5\nI0428 14:26:51.369061 2926 log.go:172] (0xc000a84420) Data frame received for 5\nI0428 14:26:51.369089 2926 log.go:172] (0xc00079e3c0) (5) Data frame handling\nI0428 14:26:51.369101 2926 log.go:172] (0xc00079e3c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0428 14:26:51.369238 2926 log.go:172] (0xc000a84420) Data frame received for 5\nI0428 14:26:51.369267 2926 log.go:172] (0xc000a84420) Data frame received for 3\nI0428 14:26:51.369293 2926 log.go:172] (0xc00079e320) (3) Data frame handling\nI0428 14:26:51.369306 2926 log.go:172] (0xc00079e320) (3) Data frame sent\nI0428 14:26:51.369317 2926 log.go:172] (0xc000a84420) Data frame received for 3\nI0428 14:26:51.369326 2926 log.go:172] (0xc00079e320) (3) Data frame handling\nI0428 14:26:51.369359 2926 log.go:172] (0xc00079e3c0) (5) Data frame handling\nI0428 14:26:51.370544 2926 log.go:172] (0xc000a84420) Data frame received for 
1\nI0428 14:26:51.370579 2926 log.go:172] (0xc0009606e0) (1) Data frame handling\nI0428 14:26:51.370600 2926 log.go:172] (0xc0009606e0) (1) Data frame sent\nI0428 14:26:51.370630 2926 log.go:172] (0xc000a84420) (0xc0009606e0) Stream removed, broadcasting: 1\nI0428 14:26:51.370668 2926 log.go:172] (0xc000a84420) Go away received\nI0428 14:26:51.371089 2926 log.go:172] (0xc000a84420) (0xc0009606e0) Stream removed, broadcasting: 1\nI0428 14:26:51.371115 2926 log.go:172] (0xc000a84420) (0xc00079e320) Stream removed, broadcasting: 3\nI0428 14:26:51.371129 2926 log.go:172] (0xc000a84420) (0xc00079e3c0) Stream removed, broadcasting: 5\n" Apr 28 14:26:51.376: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 14:26:51.376: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 14:26:51.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 14:26:51.602: INFO: stderr: "I0428 14:26:51.499331 2948 log.go:172] (0xc00013a8f0) (0xc000588aa0) Create stream\nI0428 14:26:51.499404 2948 log.go:172] (0xc00013a8f0) (0xc000588aa0) Stream added, broadcasting: 1\nI0428 14:26:51.501976 2948 log.go:172] (0xc00013a8f0) Reply frame received for 1\nI0428 14:26:51.502039 2948 log.go:172] (0xc00013a8f0) (0xc00083a000) Create stream\nI0428 14:26:51.502066 2948 log.go:172] (0xc00013a8f0) (0xc00083a000) Stream added, broadcasting: 3\nI0428 14:26:51.503152 2948 log.go:172] (0xc00013a8f0) Reply frame received for 3\nI0428 14:26:51.503197 2948 log.go:172] (0xc00013a8f0) (0xc000588b40) Create stream\nI0428 14:26:51.503212 2948 log.go:172] (0xc00013a8f0) (0xc000588b40) Stream added, broadcasting: 5\nI0428 14:26:51.505339 2948 log.go:172] (0xc00013a8f0) Reply frame received for 5\nI0428 14:26:51.569665 2948 log.go:172] (0xc00013a8f0) Data frame received 
for 5\nI0428 14:26:51.569694 2948 log.go:172] (0xc000588b40) (5) Data frame handling\nI0428 14:26:51.569714 2948 log.go:172] (0xc000588b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0428 14:26:51.594393 2948 log.go:172] (0xc00013a8f0) Data frame received for 3\nI0428 14:26:51.594418 2948 log.go:172] (0xc00083a000) (3) Data frame handling\nI0428 14:26:51.594432 2948 log.go:172] (0xc00083a000) (3) Data frame sent\nI0428 14:26:51.594439 2948 log.go:172] (0xc00013a8f0) Data frame received for 3\nI0428 14:26:51.594446 2948 log.go:172] (0xc00083a000) (3) Data frame handling\nI0428 14:26:51.594509 2948 log.go:172] (0xc00013a8f0) Data frame received for 5\nI0428 14:26:51.594542 2948 log.go:172] (0xc000588b40) (5) Data frame handling\nI0428 14:26:51.596703 2948 log.go:172] (0xc00013a8f0) Data frame received for 1\nI0428 14:26:51.596730 2948 log.go:172] (0xc000588aa0) (1) Data frame handling\nI0428 14:26:51.596746 2948 log.go:172] (0xc000588aa0) (1) Data frame sent\nI0428 14:26:51.596771 2948 log.go:172] (0xc00013a8f0) (0xc000588aa0) Stream removed, broadcasting: 1\nI0428 14:26:51.596824 2948 log.go:172] (0xc00013a8f0) Go away received\nI0428 14:26:51.597395 2948 log.go:172] (0xc00013a8f0) (0xc000588aa0) Stream removed, broadcasting: 1\nI0428 14:26:51.597425 2948 log.go:172] (0xc00013a8f0) (0xc00083a000) Stream removed, broadcasting: 3\nI0428 14:26:51.597438 2948 log.go:172] (0xc00013a8f0) (0xc000588b40) Stream removed, broadcasting: 5\n" Apr 28 14:26:51.602: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 14:26:51.602: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 14:26:51.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 28 14:26:51.883: INFO: stderr: "I0428 14:26:51.738647 2968 
log.go:172] (0xc0009ec630) (0xc000604b40) Create stream\nI0428 14:26:51.738711 2968 log.go:172] (0xc0009ec630) (0xc000604b40) Stream added, broadcasting: 1\nI0428 14:26:51.741466 2968 log.go:172] (0xc0009ec630) Reply frame received for 1\nI0428 14:26:51.741732 2968 log.go:172] (0xc0009ec630) (0xc000a28000) Create stream\nI0428 14:26:51.741851 2968 log.go:172] (0xc0009ec630) (0xc000a28000) Stream added, broadcasting: 3\nI0428 14:26:51.743696 2968 log.go:172] (0xc0009ec630) Reply frame received for 3\nI0428 14:26:51.743756 2968 log.go:172] (0xc0009ec630) (0xc000a280a0) Create stream\nI0428 14:26:51.743778 2968 log.go:172] (0xc0009ec630) (0xc000a280a0) Stream added, broadcasting: 5\nI0428 14:26:51.744925 2968 log.go:172] (0xc0009ec630) Reply frame received for 5\nI0428 14:26:51.817265 2968 log.go:172] (0xc0009ec630) Data frame received for 5\nI0428 14:26:51.817291 2968 log.go:172] (0xc000a280a0) (5) Data frame handling\nI0428 14:26:51.817300 2968 log.go:172] (0xc000a280a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0428 14:26:51.875223 2968 log.go:172] (0xc0009ec630) Data frame received for 3\nI0428 14:26:51.875286 2968 log.go:172] (0xc000a28000) (3) Data frame handling\nI0428 14:26:51.875315 2968 log.go:172] (0xc000a28000) (3) Data frame sent\nI0428 14:26:51.875330 2968 log.go:172] (0xc0009ec630) Data frame received for 3\nI0428 14:26:51.875342 2968 log.go:172] (0xc000a28000) (3) Data frame handling\nI0428 14:26:51.875392 2968 log.go:172] (0xc0009ec630) Data frame received for 5\nI0428 14:26:51.875436 2968 log.go:172] (0xc000a280a0) (5) Data frame handling\nI0428 14:26:51.877401 2968 log.go:172] (0xc0009ec630) Data frame received for 1\nI0428 14:26:51.877428 2968 log.go:172] (0xc000604b40) (1) Data frame handling\nI0428 14:26:51.877449 2968 log.go:172] (0xc000604b40) (1) Data frame sent\nI0428 14:26:51.877622 2968 log.go:172] (0xc0009ec630) (0xc000604b40) Stream removed, broadcasting: 1\nI0428 14:26:51.877690 2968 log.go:172] (0xc0009ec630) 
Go away received\nI0428 14:26:51.878047 2968 log.go:172] (0xc0009ec630) (0xc000604b40) Stream removed, broadcasting: 1\nI0428 14:26:51.878071 2968 log.go:172] (0xc0009ec630) (0xc000a28000) Stream removed, broadcasting: 3\nI0428 14:26:51.878082 2968 log.go:172] (0xc0009ec630) (0xc000a280a0) Stream removed, broadcasting: 5\n" Apr 28 14:26:51.883: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 28 14:26:51.883: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 28 14:26:51.883: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 14:26:51.887: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 28 14:27:01.894: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 28 14:27:01.894: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 28 14:27:01.894: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 28 14:27:01.911: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999507s Apr 28 14:27:02.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988663424s Apr 28 14:27:03.922: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983168803s Apr 28 14:27:04.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.977537286s Apr 28 14:27:05.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973300294s Apr 28 14:27:06.937: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.967943693s Apr 28 14:27:07.944: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.959945369s Apr 28 14:27:08.950: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.955391471s Apr 28 14:27:09.955: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.949506041s Apr 28 14:27:10.961: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 944.273408ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7189 Apr 28 14:27:11.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:27:12.188: INFO: stderr: "I0428 14:27:12.094880 2988 log.go:172] (0xc000a9e630) (0xc0004d0960) Create stream\nI0428 14:27:12.094945 2988 log.go:172] (0xc000a9e630) (0xc0004d0960) Stream added, broadcasting: 1\nI0428 14:27:12.098767 2988 log.go:172] (0xc000a9e630) Reply frame received for 1\nI0428 14:27:12.098838 2988 log.go:172] (0xc000a9e630) (0xc0004d0000) Create stream\nI0428 14:27:12.098862 2988 log.go:172] (0xc000a9e630) (0xc0004d0000) Stream added, broadcasting: 3\nI0428 14:27:12.099978 2988 log.go:172] (0xc000a9e630) Reply frame received for 3\nI0428 14:27:12.100000 2988 log.go:172] (0xc000a9e630) (0xc0004d00a0) Create stream\nI0428 14:27:12.100007 2988 log.go:172] (0xc000a9e630) (0xc0004d00a0) Stream added, broadcasting: 5\nI0428 14:27:12.101450 2988 log.go:172] (0xc000a9e630) Reply frame received for 5\nI0428 14:27:12.181264 2988 log.go:172] (0xc000a9e630) Data frame received for 5\nI0428 14:27:12.181302 2988 log.go:172] (0xc0004d00a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0428 14:27:12.181326 2988 log.go:172] (0xc000a9e630) Data frame received for 3\nI0428 14:27:12.181380 2988 log.go:172] (0xc0004d0000) (3) Data frame handling\nI0428 14:27:12.181399 2988 log.go:172] (0xc0004d0000) (3) Data frame sent\nI0428 14:27:12.181417 2988 log.go:172] (0xc000a9e630) Data frame received for 3\nI0428 14:27:12.181431 2988 log.go:172] (0xc0004d0000) (3) Data frame handling\nI0428 14:27:12.181445 2988 log.go:172] (0xc0004d00a0) (5) Data frame sent\nI0428 14:27:12.181458 2988 log.go:172] (0xc000a9e630) Data frame received for 5\nI0428 
14:27:12.181463 2988 log.go:172] (0xc0004d00a0) (5) Data frame handling\nI0428 14:27:12.182755 2988 log.go:172] (0xc000a9e630) Data frame received for 1\nI0428 14:27:12.182771 2988 log.go:172] (0xc0004d0960) (1) Data frame handling\nI0428 14:27:12.182792 2988 log.go:172] (0xc0004d0960) (1) Data frame sent\nI0428 14:27:12.182808 2988 log.go:172] (0xc000a9e630) (0xc0004d0960) Stream removed, broadcasting: 1\nI0428 14:27:12.182861 2988 log.go:172] (0xc000a9e630) Go away received\nI0428 14:27:12.183103 2988 log.go:172] (0xc000a9e630) (0xc0004d0960) Stream removed, broadcasting: 1\nI0428 14:27:12.183120 2988 log.go:172] (0xc000a9e630) (0xc0004d0000) Stream removed, broadcasting: 3\nI0428 14:27:12.183131 2988 log.go:172] (0xc000a9e630) (0xc0004d00a0) Stream removed, broadcasting: 5\n" Apr 28 14:27:12.188: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 14:27:12.188: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 14:27:12.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:27:12.415: INFO: stderr: "I0428 14:27:12.341220 3009 log.go:172] (0xc0008aa0b0) (0xc000587720) Create stream\nI0428 14:27:12.341270 3009 log.go:172] (0xc0008aa0b0) (0xc000587720) Stream added, broadcasting: 1\nI0428 14:27:12.343383 3009 log.go:172] (0xc0008aa0b0) Reply frame received for 1\nI0428 14:27:12.343428 3009 log.go:172] (0xc0008aa0b0) (0xc00079e000) Create stream\nI0428 14:27:12.343440 3009 log.go:172] (0xc0008aa0b0) (0xc00079e000) Stream added, broadcasting: 3\nI0428 14:27:12.344319 3009 log.go:172] (0xc0008aa0b0) Reply frame received for 3\nI0428 14:27:12.344360 3009 log.go:172] (0xc0008aa0b0) (0xc0005877c0) Create stream\nI0428 14:27:12.344378 3009 log.go:172] (0xc0008aa0b0) (0xc0005877c0) Stream added, broadcasting: 5\nI0428 
14:27:12.345367 3009 log.go:172] (0xc0008aa0b0) Reply frame received for 5\nI0428 14:27:12.409106 3009 log.go:172] (0xc0008aa0b0) Data frame received for 5\nI0428 14:27:12.409373 3009 log.go:172] (0xc0005877c0) (5) Data frame handling\nI0428 14:27:12.409405 3009 log.go:172] (0xc0005877c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0428 14:27:12.409435 3009 log.go:172] (0xc0008aa0b0) Data frame received for 3\nI0428 14:27:12.409462 3009 log.go:172] (0xc00079e000) (3) Data frame handling\nI0428 14:27:12.409484 3009 log.go:172] (0xc00079e000) (3) Data frame sent\nI0428 14:27:12.409500 3009 log.go:172] (0xc0008aa0b0) Data frame received for 3\nI0428 14:27:12.409509 3009 log.go:172] (0xc00079e000) (3) Data frame handling\nI0428 14:27:12.409553 3009 log.go:172] (0xc0008aa0b0) Data frame received for 5\nI0428 14:27:12.409600 3009 log.go:172] (0xc0005877c0) (5) Data frame handling\nI0428 14:27:12.411301 3009 log.go:172] (0xc0008aa0b0) Data frame received for 1\nI0428 14:27:12.411317 3009 log.go:172] (0xc000587720) (1) Data frame handling\nI0428 14:27:12.411324 3009 log.go:172] (0xc000587720) (1) Data frame sent\nI0428 14:27:12.411333 3009 log.go:172] (0xc0008aa0b0) (0xc000587720) Stream removed, broadcasting: 1\nI0428 14:27:12.411383 3009 log.go:172] (0xc0008aa0b0) Go away received\nI0428 14:27:12.411556 3009 log.go:172] (0xc0008aa0b0) (0xc000587720) Stream removed, broadcasting: 1\nI0428 14:27:12.411569 3009 log.go:172] (0xc0008aa0b0) (0xc00079e000) Stream removed, broadcasting: 3\nI0428 14:27:12.411574 3009 log.go:172] (0xc0008aa0b0) (0xc0005877c0) Stream removed, broadcasting: 5\n" Apr 28 14:27:12.415: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 28 14:27:12.415: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 28 14:27:12.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:27:12.624: INFO: rc: 1 Apr 28 14:27:12.624: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0428 14:27:12.548479 3025 log.go:172] (0xc000116dc0) (0xc0006f26e0) Create stream I0428 14:27:12.548558 3025 log.go:172] (0xc000116dc0) (0xc0006f26e0) Stream added, broadcasting: 1 I0428 14:27:12.551922 3025 log.go:172] (0xc000116dc0) Reply frame received for 1 I0428 14:27:12.551958 3025 log.go:172] (0xc000116dc0) (0xc00030e1e0) Create stream I0428 14:27:12.551969 3025 log.go:172] (0xc000116dc0) (0xc00030e1e0) Stream added, broadcasting: 3 I0428 14:27:12.553041 3025 log.go:172] (0xc000116dc0) Reply frame received for 3 I0428 14:27:12.553097 3025 log.go:172] (0xc000116dc0) (0xc000842000) Create stream I0428 14:27:12.553252 3025 log.go:172] (0xc000116dc0) (0xc000842000) Stream added, broadcasting: 5 I0428 14:27:12.554452 3025 log.go:172] (0xc000116dc0) Reply frame received for 5 I0428 14:27:12.617770 3025 log.go:172] (0xc000116dc0) Data frame received for 1 I0428 14:27:12.617801 3025 log.go:172] (0xc000116dc0) (0xc000842000) Stream removed, broadcasting: 5 I0428 14:27:12.617827 3025 log.go:172] (0xc0006f26e0) (1) Data frame handling I0428 14:27:12.617897 3025 log.go:172] (0xc000116dc0) (0xc00030e1e0) Stream removed, broadcasting: 3 I0428 14:27:12.617983 3025 log.go:172] (0xc0006f26e0) (1) Data frame sent I0428 14:27:12.618029 3025 log.go:172] (0xc000116dc0) (0xc0006f26e0) Stream removed, broadcasting: 1 I0428 14:27:12.618060 3025 log.go:172] (0xc000116dc0) Go away received I0428 14:27:12.618490 3025 log.go:172] (0xc000116dc0) (0xc0006f26e0) Stream removed, broadcasting: 1 I0428 14:27:12.618511 3025 log.go:172] (0xc000116dc0) (0xc00030e1e0) Stream removed, broadcasting: 3 
I0428 14:27:12.618520 3025 log.go:172] (0xc000116dc0) (0xc000842000) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "5d7a78ddc9d5eecf657f590bdefb643fb0e7c6970bf15cc29736854871b6d892": OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown [] 0xc00281b3e0 exit status 1 true [0xc001f5c590 0xc001f5c5a8 0xc001f5c5c0] [0xc001f5c590 0xc001f5c5a8 0xc001f5c5c0] [0xc001f5c5a0 0xc001f5c5b8] [0xba70e0 0xba70e0] 0xc00201f9e0 }: Command stdout: stderr: I0428 14:27:12.548479 3025 log.go:172] (0xc000116dc0) (0xc0006f26e0) Create stream I0428 14:27:12.548558 3025 log.go:172] (0xc000116dc0) (0xc0006f26e0) Stream added, broadcasting: 1 I0428 14:27:12.551922 3025 log.go:172] (0xc000116dc0) Reply frame received for 1 I0428 14:27:12.551958 3025 log.go:172] (0xc000116dc0) (0xc00030e1e0) Create stream I0428 14:27:12.551969 3025 log.go:172] (0xc000116dc0) (0xc00030e1e0) Stream added, broadcasting: 3 I0428 14:27:12.553041 3025 log.go:172] (0xc000116dc0) Reply frame received for 3 I0428 14:27:12.553097 3025 log.go:172] (0xc000116dc0) (0xc000842000) Create stream I0428 14:27:12.553252 3025 log.go:172] (0xc000116dc0) (0xc000842000) Stream added, broadcasting: 5 I0428 14:27:12.554452 3025 log.go:172] (0xc000116dc0) Reply frame received for 5 I0428 14:27:12.617770 3025 log.go:172] (0xc000116dc0) Data frame received for 1 I0428 14:27:12.617801 3025 log.go:172] (0xc000116dc0) (0xc000842000) Stream removed, broadcasting: 5 I0428 14:27:12.617827 3025 log.go:172] (0xc0006f26e0) (1) Data frame handling I0428 14:27:12.617897 3025 log.go:172] (0xc000116dc0) (0xc00030e1e0) Stream removed, broadcasting: 3 I0428 14:27:12.617983 3025 log.go:172] (0xc0006f26e0) (1) Data frame sent I0428 14:27:12.618029 3025 log.go:172] (0xc000116dc0) (0xc0006f26e0) Stream removed, broadcasting: 1 
I0428 14:27:12.618060 3025 log.go:172] (0xc000116dc0) Go away received I0428 14:27:12.618490 3025 log.go:172] (0xc000116dc0) (0xc0006f26e0) Stream removed, broadcasting: 1 I0428 14:27:12.618511 3025 log.go:172] (0xc000116dc0) (0xc00030e1e0) Stream removed, broadcasting: 3 I0428 14:27:12.618520 3025 log.go:172] (0xc000116dc0) (0xc000842000) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "5d7a78ddc9d5eecf657f590bdefb643fb0e7c6970bf15cc29736854871b6d892": OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown error: exit status 1 Apr 28 14:27:22.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:27:22.788: INFO: rc: 1 Apr 28 14:27:22.789: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00281b4a0 exit status 1 true [0xc001f5c5c8 0xc001f5c5e0 0xc001f5c5f8] [0xc001f5c5c8 0xc001f5c5e0 0xc001f5c5f8] [0xc001f5c5d8 0xc001f5c5f0] [0xba70e0 0xba70e0] 0xc001dd19e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 14:27:32.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:27:32.893: INFO: rc: 1 Apr 28 14:27:32.894: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 
ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c7e090 exit status 1 true [0xc0000e8060 0xc0000e9598 0xc0000e9920] [0xc0000e8060 0xc0000e9598 0xc0000e9920] [0xc0000e9118 0xc0000e98f0] [0xba70e0 0xba70e0] 0xc00201efc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 14:27:42.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:27:42.995: INFO: rc: 1 Apr 28 14:27:42.995: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c7e150 exit status 1 true [0xc0000e9b78 0xc0012be038 0xc0012be388] [0xc0000e9b78 0xc0012be038 0xc0012be388] [0xc0000e9fb8 0xc0012be350] [0xba70e0 0xba70e0] 0xc0019e7620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 14:27:52.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:27:53.118: INFO: rc: 1 Apr 28 14:27:53.118: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002510090 exit status 1 true [0xc002314000 0xc002314048 0xc002314060] [0xc002314000 0xc002314048 0xc002314060] [0xc002314040 0xc002314058] [0xba70e0 0xba70e0] 0xc001cbec60 }: Command stdout: stderr: Error from 
server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 14:28:03.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:28:03.226: INFO: rc: 1 Apr 28 14:28:03.226: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002490090 exit status 1 true [0xc001f5c000 0xc001f5c018 0xc001f5c030] [0xc001f5c000 0xc001f5c018 0xc001f5c030] [0xc001f5c010 0xc001f5c028] [0xba70e0 0xba70e0] 0xc0021828a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 14:28:13.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:28:13.321: INFO: rc: 1 Apr 28 14:28:13.321: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002c7e2a0 exit status 1 true [0xc0012be480 0xc0012be740 0xc0012be9f8] [0xc0012be480 0xc0012be740 0xc0012be9f8] [0xc0012be6b0 0xc0012be920] [0xba70e0 0xba70e0] 0xc0024766c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 14:28:23.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:28:23.443: INFO: rc: 1 Apr 28 14:28:23.443: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002490150 exit status 1 true [0xc001f5c038 0xc001f5c050 0xc001f5c068] [0xc001f5c038 0xc001f5c050 0xc001f5c068] [0xc001f5c048 0xc001f5c060] [0xba70e0 0xba70e0] 0xc0021833e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 28 14:28:33.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:28:33.559: INFO: rc: 1 Apr 28 14:28:33.559: INFO: Waiting 10s to retry failed RunHostCmd
[... the same RunHostCmd attempt repeated every 10s from 14:28:43 through 14:32:08, each retry failing identically with rc: 1 and `Error from server (NotFound): pods "ss-2" not found` ...]
Apr 28 14:32:18.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7189 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 28 14:32:18.340: INFO: rc: 1 Apr 28 14:32:18.340: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Apr 28 14:32:18.340: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 28 14:32:18.352: INFO: Deleting all statefulset in ns statefulset-7189 Apr 28 14:32:18.353: INFO: Scaling statefulset ss to 0 Apr 28 14:32:18.358: INFO: Waiting for statefulset status.replicas updated to 0 Apr 28 14:32:18.360: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:32:18.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7189" for this suite. 
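The retry loop above follows a simple pattern: re-run the command on a fixed interval until it succeeds or a deadline passes. A minimal sketch of that pattern, assuming a hypothetical `retry_until` helper (the e2e framework's own RunHostCmd retry waits 10s between attempts; a 1s interval is used here to keep the sketch quick to run):

```shell
# Re-run a command every second until it exits 0 or `timeout` seconds elapse.
# `retry_until` is an illustrative stand-in, not the framework's real helper.
retry_until() {
  timeout=$1; shift
  deadline=$(( $(date +%s) + timeout ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1   # give up, as the test above did after ~4 minutes of NotFound
    fi
    sleep 1
  done
}

# A command that succeeds immediately needs no retries:
retry_until 5 true && echo "succeeded"
# A command that can never succeed exhausts the deadline:
retry_until 1 false || echo "gave up"
```

In the log, the target pod `ss-2` had already been deleted, so every attempt failed with the same `NotFound` until the framework's deadline expired and the test moved on to scaling the StatefulSet down.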
Apr 28 14:32:24.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:32:24.489: INFO: namespace statefulset-7189 deletion completed in 6.099436517s • [SLOW TEST:374.088 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:32:24.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 28 14:32:29.746: INFO: Expected: &{DONE} to match Container's 
Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:32:29.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-503" for this suite. Apr 28 14:32:35.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:32:35.913: INFO: namespace container-runtime-503 deletion completed in 6.125289431s • [SLOW TEST:11.424 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:32:35.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-de672fa2-3236-49a8-9783-693d29fd4f3f STEP: Creating a pod to test consume configMaps Apr 28 14:32:35.978: INFO: Waiting up to 5m0s for pod "pod-configmaps-2373a3d0-4043-44ed-8e1d-6460e153899a" in namespace "configmap-5702" to be "success or failure" Apr 28 14:32:35.981: INFO: Pod "pod-configmaps-2373a3d0-4043-44ed-8e1d-6460e153899a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.60963ms Apr 28 14:32:37.989: INFO: Pod "pod-configmaps-2373a3d0-4043-44ed-8e1d-6460e153899a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011569287s Apr 28 14:32:39.994: INFO: Pod "pod-configmaps-2373a3d0-4043-44ed-8e1d-6460e153899a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016313502s STEP: Saw pod success Apr 28 14:32:39.994: INFO: Pod "pod-configmaps-2373a3d0-4043-44ed-8e1d-6460e153899a" satisfied condition "success or failure" Apr 28 14:32:39.998: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2373a3d0-4043-44ed-8e1d-6460e153899a container configmap-volume-test: STEP: delete the pod Apr 28 14:32:40.035: INFO: Waiting for pod pod-configmaps-2373a3d0-4043-44ed-8e1d-6460e153899a to disappear Apr 28 14:32:40.048: INFO: Pod pod-configmaps-2373a3d0-4043-44ed-8e1d-6460e153899a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:32:40.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5702" for this suite. 
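The Pending/Pending/Succeeded sequence above is the framework polling the pod's phase until it reaches a terminal state. A minimal sketch of that classification, assuming a hypothetical `pod_done` helper (in a live cluster the phase would come from `kubectl get pod <name> -o jsonpath='{.status.phase}'`):

```shell
# Return 0 once a pod phase is terminal ("success or failure"), 1 otherwise.
# `pod_done` is an illustrative stand-in, not the framework's real function.
pod_done() {
  case "$1" in
    Succeeded|Failed) return 0 ;;  # terminal: stop polling
    *)                return 1 ;;  # Pending/Running/Unknown: keep waiting
  esac
}

pod_done Pending   || echo "still waiting"
pod_done Succeeded && echo "satisfied condition"
```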
Apr 28 14:32:46.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:32:46.139: INFO: namespace configmap-5702 deletion completed in 6.088040346s • [SLOW TEST:10.226 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:32:46.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5751.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5751.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5751.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 14:32:52.320: INFO: DNS probes using 
dns-test-77b0aaea-4dba-4729-af5b-1e2be10843ec succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5751.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5751.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5751.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 14:32:58.452: INFO: File wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local from pod dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 contains '' instead of 'bar.example.com.' Apr 28 14:32:58.455: INFO: Lookups using dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 failed for: [wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local] Apr 28 14:33:03.461: INFO: File wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local from pod dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 28 14:33:03.465: INFO: File jessie_udp@dns-test-service-3.dns-5751.svc.cluster.local from pod dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 28 14:33:03.465: INFO: Lookups using dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 failed for: [wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local jessie_udp@dns-test-service-3.dns-5751.svc.cluster.local] Apr 28 14:33:08.460: INFO: File wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local from pod dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 contains 'foo.example.com. ' instead of 'bar.example.com.' 
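The "contains 'foo.example.com. ' instead of 'bar.example.com.'" messages above come from comparing each probe pod's results file (written by the `dig +short ... CNAME` loop) against the expected CNAME target. A minimal sketch of that comparison, assuming a hypothetical `expect_cname` helper:

```shell
# Compare a probe results file against the expected CNAME target.
# `expect_cname` is an illustrative stand-in, not the framework's real code.
expect_cname() {
  file=$1; want=$2
  got=$(tr -d '[:space:]' < "$file")   # dig's output carries trailing whitespace
  [ "$got" = "$want" ]
}

printf 'foo.example.com. \n' > /tmp/wheezy_udp_probe
expect_cname /tmp/wheezy_udp_probe 'foo.example.com.' && echo "lookup succeeded"
expect_cname /tmp/wheezy_udp_probe 'bar.example.com.' || echo "lookup failed"
```

The transient failures in the log are expected: after the ExternalName target changes to `bar.example.com`, cached `foo.example.com.` answers keep appearing until the new record propagates, at which point the probes succeed.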
Apr 28 14:33:08.464: INFO: Lookups using dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 failed for: [wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local] Apr 28 14:33:13.460: INFO: File wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local from pod dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 28 14:33:13.464: INFO: Lookups using dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 failed for: [wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local] Apr 28 14:33:18.461: INFO: File wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local from pod dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 28 14:33:18.464: INFO: File jessie_udp@dns-test-service-3.dns-5751.svc.cluster.local from pod dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 28 14:33:18.464: INFO: Lookups using dns-5751/dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 failed for: [wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local jessie_udp@dns-test-service-3.dns-5751.svc.cluster.local] Apr 28 14:33:23.462: INFO: DNS probes using dns-test-ab462a33-3b12-4ea8-ba33-e2a2bef24599 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5751.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5751.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5751.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5751.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 28 14:33:30.016: INFO: DNS probes using 
dns-test-f63a5ed0-8585-4272-afb4-8c604c5e3f4e succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 28 14:33:30.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5751" for this suite. Apr 28 14:33:36.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 28 14:33:36.238: INFO: namespace dns-5751 deletion completed in 6.092588981s • [SLOW TEST:50.099 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 28 14:33:36.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Apr 28 14:33:40.311: INFO: Pod pod-hostip-21b6b5bf-12aa-46f4-90c2-64352cbc86c4 has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:33:40.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7585" for this suite.
Apr 28 14:34:02.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:34:02.405: INFO: namespace pods-7585 deletion completed in 22.090584639s

• [SLOW TEST:26.167 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:34:02.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 28 14:34:02.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7857'
Apr 28 14:34:02.552: INFO: stderr: ""
Apr 28 14:34:02.553: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Apr 28 14:34:02.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7857'
Apr 28 14:34:11.883: INFO: stderr: ""
Apr 28 14:34:11.883: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:34:11.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7857" for this suite.
Apr 28 14:34:17.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:34:17.974: INFO: namespace kubectl-7857 deletion completed in 6.084194389s

• [SLOW TEST:15.568 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:34:17.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-d48b285f-ba04-4087-868d-8d792d5aac1f
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:34:18.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1338" for this suite.
Apr 28 14:34:24.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:34:24.405: INFO: namespace secrets-1338 deletion completed in 6.15349029s

• [SLOW TEST:6.431 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:34:24.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-04b5cd13-49ad-43cd-bff6-a662a5869116
STEP: Creating a pod to test consume secrets
Apr 28 14:34:24.489: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-930fee14-c348-4af0-a03a-010915098334" in namespace "projected-6440" to be "success or failure"
Apr 28 14:34:24.508: INFO: Pod "pod-projected-secrets-930fee14-c348-4af0-a03a-010915098334": Phase="Pending", Reason="", readiness=false. Elapsed: 19.190377ms
Apr 28 14:34:26.513: INFO: Pod "pod-projected-secrets-930fee14-c348-4af0-a03a-010915098334": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023847575s
Apr 28 14:34:28.518: INFO: Pod "pod-projected-secrets-930fee14-c348-4af0-a03a-010915098334": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028467446s
STEP: Saw pod success
Apr 28 14:34:28.518: INFO: Pod "pod-projected-secrets-930fee14-c348-4af0-a03a-010915098334" satisfied condition "success or failure"
Apr 28 14:34:28.521: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-930fee14-c348-4af0-a03a-010915098334 container projected-secret-volume-test:
STEP: delete the pod
Apr 28 14:34:28.556: INFO: Waiting for pod pod-projected-secrets-930fee14-c348-4af0-a03a-010915098334 to disappear
Apr 28 14:34:28.571: INFO: Pod pod-projected-secrets-930fee14-c348-4af0-a03a-010915098334 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:34:28.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6440" for this suite.
Apr 28 14:34:34.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:34:34.689: INFO: namespace projected-6440 deletion completed in 6.115338343s

• [SLOW TEST:10.284 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:34:34.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Apr 28 14:34:35.268: INFO: created pod pod-service-account-defaultsa
Apr 28 14:34:35.268: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 28 14:34:35.288: INFO: created pod pod-service-account-mountsa
Apr 28 14:34:35.289: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 28 14:34:35.298: INFO: created pod pod-service-account-nomountsa
Apr 28 14:34:35.298: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 28 14:34:35.345: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 28 14:34:35.345: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 28 14:34:35.352: INFO: created pod pod-service-account-mountsa-mountspec
Apr 28 14:34:35.352: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 28 14:34:35.379: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 28 14:34:35.379: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 28 14:34:35.512: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 28 14:34:35.512: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 28 14:34:35.516: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 28 14:34:35.517: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 28 14:34:35.525: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 28 14:34:35.525: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:34:35.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7671" for this suite.
Apr 28 14:35:05.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:35:05.773: INFO: namespace svcaccounts-7671 deletion completed in 30.203907167s

• [SLOW TEST:31.084 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:35:05.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-1895
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1895
STEP: Deleting pre-stop pod
Apr 28 14:35:18.900: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:35:18.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1895" for this suite.
Apr 28 14:35:56.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:35:57.068: INFO: namespace prestop-1895 deletion completed in 38.155742494s

• [SLOW TEST:51.295 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:35:57.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 28 14:35:57.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:36:01.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9239" for this suite.
Apr 28 14:36:43.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:36:43.279: INFO: namespace pods-9239 deletion completed in 42.092540703s

• [SLOW TEST:46.211 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:36:43.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-8f172d00-9d03-4ef6-b5a2-f67154291fe7 in namespace container-probe-6195
Apr 28 14:36:47.402: INFO: Started pod busybox-8f172d00-9d03-4ef6-b5a2-f67154291fe7 in namespace container-probe-6195
STEP: checking the pod's current state and verifying that restartCount is present
Apr 28 14:36:47.406: INFO: Initial restart count of pod busybox-8f172d00-9d03-4ef6-b5a2-f67154291fe7 is 0
Apr 28 14:37:37.564: INFO: Restart count of pod container-probe-6195/busybox-8f172d00-9d03-4ef6-b5a2-f67154291fe7 is now 1 (50.158848043s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:37:37.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6195" for this suite.
Apr 28 14:37:43.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:37:43.710: INFO: namespace container-probe-6195 deletion completed in 6.12068856s

• [SLOW TEST:60.431 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:37:43.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 28 14:37:47.805: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:37:47.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4050" for this suite.
Apr 28 14:37:53.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:37:53.964: INFO: namespace container-runtime-4050 deletion completed in 6.089697098s

• [SLOW TEST:10.254 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:37:53.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 28 14:37:54.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Apr 28 14:37:54.184: INFO: stderr: ""
Apr 28 14:37:54.184: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:39:42Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:37:54.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3258" for this suite.
Apr 28 14:38:00.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:38:00.288: INFO: namespace kubectl-3258 deletion completed in 6.09780077s

• [SLOW TEST:6.322 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:38:00.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Apr 28 14:38:00.349: INFO: Waiting up to 5m0s for pod "client-containers-38305e47-e531-4d99-8162-a9ef039fea1a" in namespace "containers-9178" to be "success or failure"
Apr 28 14:38:00.352: INFO: Pod "client-containers-38305e47-e531-4d99-8162-a9ef039fea1a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.50513ms
Apr 28 14:38:02.356: INFO: Pod "client-containers-38305e47-e531-4d99-8162-a9ef039fea1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006814356s
Apr 28 14:38:04.359: INFO: Pod "client-containers-38305e47-e531-4d99-8162-a9ef039fea1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010509786s
STEP: Saw pod success
Apr 28 14:38:04.359: INFO: Pod "client-containers-38305e47-e531-4d99-8162-a9ef039fea1a" satisfied condition "success or failure"
Apr 28 14:38:04.362: INFO: Trying to get logs from node iruya-worker pod client-containers-38305e47-e531-4d99-8162-a9ef039fea1a container test-container:
STEP: delete the pod
Apr 28 14:38:04.384: INFO: Waiting for pod client-containers-38305e47-e531-4d99-8162-a9ef039fea1a to disappear
Apr 28 14:38:04.388: INFO: Pod client-containers-38305e47-e531-4d99-8162-a9ef039fea1a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:38:04.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9178" for this suite.
Apr 28 14:38:10.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:38:10.482: INFO: namespace containers-9178 deletion completed in 6.090132419s

• [SLOW TEST:10.194 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 28 14:38:10.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 28 14:38:14.577: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 28 14:38:14.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8192" for this suite.
Apr 28 14:38:20.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 28 14:38:20.708: INFO: namespace container-runtime-8192 deletion completed in 6.109561226s

• [SLOW TEST:10.226 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
Apr 28 14:38:20.708: INFO: Running AfterSuite actions on all nodes
Apr 28 14:38:20.708: INFO: Running AfterSuite actions on node 1
Apr 28 14:38:20.708: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6146.433 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS